
Supervised probabilistic latent semantic analysis with applications to controversy analysis of legislative bills

Abstract

Probabilistic Latent Semantic Analysis (PLSA) is a fundamental text analysis technique that models each word in a document as a sample from a mixture of topics. PLSA is the precursor of probabilistic topic models including Latent Dirichlet Allocation (LDA). PLSA, LDA and their numerous extensions have been successfully applied to many text mining and retrieval tasks. One important extension of LDA is supervised LDA (sLDA), which distinguishes itself from most topic models in that it is supervised. However, to the best of our knowledge, no prior work extends PLSA in a manner similar to how sLDA extends LDA by jointly modeling the contents and the responses of documents. In this paper, we propose supervised PLSA (sPLSA) which can efficiently infer latent topics and their factorized response values from the contents and the responses of documents. The major challenge lies in estimating a document’s topic distribution, which is a constrained probability dictated by both the content and the response of the document. To tackle this challenge, we introduce an auxiliary variable to transform the constrained optimization problem into an unconstrained optimization problem. This allows us to derive an efficient Expectation-Maximization (EM) algorithm for parameter estimation. Compared to sLDA, sPLSA converges much faster and requires less hyperparameter tuning, while performing similarly on topic modeling and better in response factorization. This makes sPLSA an appealing choice for latent response analysis such as ranking latent topics by their factorized response values. We apply the proposed sPLSA model to analyze the controversy of bills from the United States Congress. We demonstrate the effectiveness of our model by identifying contentious legislative issues.

1. Introduction

Hofmann [1] introduced Probabilistic Latent Semantic Analysis (PLSA), which is also known as Probabilistic Latent Semantic Indexing (PLSI) when used in information retrieval and text mining [2]. The basic idea of PLSA is to treat the words in each document as observations from a mixture model where the components of the model are word distributions for latent topics. The selection of the latent topics is controlled by a set of mixing weights such that words in the same document share the same mixing weights. PLSA was initially proposed for text-based applications that do indexing, retrieval, mining, and clustering. Later, its use was expanded to other fields including collaborative filtering [3], computer vision [4], and audio processing [5].

PLSA can be viewed as a probabilistic version of the seminal work on latent semantic analysis [6], which revealed the utility of the singular value decomposition of the document-term matrix. PLSA is the precursor of probabilistic topic models which are widely used nowadays including Latent Dirichlet Allocation (LDA) [7]. The basic generative processes of PLSA and LDA are very similar. In PLSA, the topic mixture is conditioned on each document, while the topic mixture in LDA is drawn from a conjugate Dirichlet prior. Theoretically, PLSA is equivalent to MAP-estimated LDA under a uniform prior [8]. The PLSA model does not make any assumptions about how the mixture weights are generated and thus its generative semantics are not well defined [7]. Consequently, there is no natural way to predict a previously unseen document. On the other hand, the LDA model is more complex and cannot be solved by exact inference. Gibbs sampling [9] and variational inference [7] are often used for inference in LDA-type topic models. However, these methods scale poorly to large datasets. Variational inference requires dozens of expensive passes over the entire dataset, and Gibbs sampling requires multiple Markov chains [10]. In contrast, the parameter estimation and inference of PLSA can be efficiently done by the EM algorithm.

PLSA and LDA are the two most representative topic models. Various empirical comparisons have been conducted between them. Blei et al. [7] show that LDA outperforms PLSA in the perplexity of new documents. On the other hand, Lu et al. [11] conducted a systematic empirical study of PLSA and LDA on three representative IR tasks: document clustering, text categorization, and ad-hoc retrieval. They found that LDA and PLSA tend to perform similarly on these tasks. Furthermore, the performance of LDA on all tasks is quite sensitive to the setting of its hyperparameters, and the optimal setting of hyperparameters varies according to how the model is used in a task.

The original PLSA and LDA models as well as most of their variants are unsupervised models. Many real-world text documents are associated with a response variable connected to each document such as the number of stars given to a movie, the number of times a news article was downloaded, or the category of a document. Incorporating such information into latent aspect modeling could guide a topic model towards discovering semantically more salient statistical patterns that may be more interesting or relevant to the user’s task. Thus, a very important extension of LDA is supervised LDA (sLDA) [12]. sLDA jointly models the content and responses of documents in order to find latent topics that best predict the responses of documents.

In this paper, we propose supervised Probabilistic Latent Semantic Analysis (sPLSA) by extending PLSA to learn from the responses of documents. Our proposed model is to PLSA what sLDA is to LDA. The major challenge lies in estimating a document’s topic distribution, which is a constrained probability distribution dictated by both the content and the response of the document. We introduce an auxiliary variable to transform the constrained optimization problem into an unconstrained optimization problem. This allows us to derive an efficient EM algorithm to estimate the parameters of our model. Compared to sLDA, sPLSA is much more efficient and requires less hyperparameter tuning, while performing similarly on topic modeling and better in response factorization. This makes sPLSA an appealing choice for latent response analysis such as ranking latent topics by their factorized response values. We utilize the sPLSA model to analyze the controversy of bills from the United States Congress. We demonstrate the effectiveness of our model by identifying contentious legislative issues. The contributions of the paper can be summarized as follows.

  • We propose a novel supervised PLSA model which can efficiently infer latent topics and their factorized response values from the contents and the responses of documents.

  • We derive an efficient EM algorithm to estimate the parameters of the model.

  • We utilize sPLSA and sLDA to analyze the controversy of bills from the United States Congress. We demonstrate the effectiveness of sPLSA over sLDA as part of this analysis.

2. Related work

2.1 Probabilistic topic models

In 1999, three papers [1, 2, 13] introduced the model of Probabilistic Latent Semantic Analysis. One variant of the model appeared in 1998 [14], and all these models were originally discussed in an earlier technical report [15]. PLSA was a probabilistic implementation of latent semantic analysis (LSA), introduced by Deerwester et al. [6]. LSA extended the vector space model and aimed to represent documents in a low dimensional vector space consisting of common semantic factors. Whereas LSA projects document or word vectors into the latent semantic space, PLSA extracts the aspects related to documents. This aspect model is interpreted as a mixture model containing latent semantic mixtures, whose mixture probabilities are estimated by the maximum-likelihood (ML) principle. PLSA did not provide a straightforward way to make inferences about new documents not seen in the training data, and the parameterization of the model was susceptible to overfitting. Latent Dirichlet Allocation (LDA) addressed these limitations with a Bayesian probabilistic topic model.

PLSA and LDA established the field of probabilistic topic models. Many extensions of the two basic models have been proposed. In Zhai et al. [16], PLSA was extended to include a background component to explain non-informative background words, and a cross-collection mixture model was proposed to support comparative text mining. Mei and Zhai [17] propose a general contextual text mining model which extends PLSA to incorporate context information. They further regularize PLSA with a harmonic regularizer based on a graph structure in the data [18]. One active area of topic modeling research is how to relax and extend the assumptions of PLSA and LDA to uncover more sophisticated structure in texts. For example, the work by Rosen-Zvi et al. [19] extends LDA to include authorship information. More recently, probabilistic topic models have been proposed for unsupervised many-to-many object matching [20] and cross-lingual tasks [21]. Many other topic models have been proposed as well; Blei [22] gives an overview of the field of probabilistic topic models.

The original PLSA and LDA and most of their variants are unsupervised models. Blei and McAuliffe [12] proposed supervised LDA (sLDA) to capture a real-valued document rating as a regression response. The generative process of sLDA is similar to that of LDA, but with an additional step: drawing a response variable. The sLDA model is trained by maximizing the joint likelihood of the contents and the responses of documents. They tested sLDA on two real-world datasets, movie reviews with ratings and web pages with popularity, and the experimental results demonstrated the advantages of sLDA versus regularized regression, and versus an unsupervised LDA analysis followed by a separate regression. Other extensions include multi-class sLDA [23], which directly captures discrete labels of documents as a classification response; discriminative LDA (DiscLDA) [24], which also performs classification, but with a mechanism different from that of sLDA; and MedLDA [25], which leverages the maximum margin principle for the estimation of latent topical representations. Jameel et al. [26] integrated class label information and word order structure into a supervised topic model for document classification. More variants of supervised topic models can be found in a number of applied domains, such as Labeled LDA [27], automatic summarization of changes in dynamic text collections [28], modeling of numerical time series [29], inferring topic hierarchies [30], and query expansion [31]. In computer vision, several supervised topic models have been designed for understanding complex scene images [32, 33]. Mimno and McCallum [34] also proposed a topic model that considers document-level meta-data, for example, the publication date and venue of a paper.

Most of the above supervised topic models are based on LDA. Very little work exists on extending PLSA to the supervised setting. One such work used the spoken content of a multimedia document as a query for retrieving similar or relevant documents [35]. The query was used to train the model in a supervised fashion with respect to a query-document similarity objective function. Fergus et al. [36] extended PLSA to include spatial information in a translation and scale invariant manner, and utilized this modified PLSA model to learn an object category. Another work added a category-topic distribution to PLSA for human action recognition [37]. However, these models do not associate the topic distribution of the document with the response variable. Consequently, the discovered topics may not be indicative of the response. Aliyanto et al. [38] proposed a version of supervised PLSA for estimating technology readiness level, but they assumed the topic of each word in a document is observed, which is rarely the case in real-world applications. In this paper, we follow the way LDA was extended to sLDA by directly associating the documents’ topic distributions with the response. The response is at the document level instead of the word level, and it is more readily accessible. The learned topics depend on both the document’s content and its response. To the best of our knowledge, no prior work has extended PLSA in this manner.

Recently, with the rise of deep learning, novel topic models based on neural networks have been proposed. Salakhutdinov and Hinton [39] proposed a two-layer restricted Boltzmann machine (RBM) called the replicated softmax to extract low-level latent topics from a large collection of unstructured documents. Larochelle and Lauly [40] proposed a neural auto-regressive topic model inspired by the replicated softmax model, replacing the RBM with a neural auto-regressive distribution estimator (NADE). Kingma and Welling [41] proposed variational autoencoders, which have since been combined with topic modeling in neural topic models. Cao et al. [42] proposed the neural topic model (NTM) and its supervised extension (sNTM), in which word and document embeddings are combined. Moody [43] proposed lda2vec, a model combining LDA and word embeddings. Dieng et al. [44] integrated global word semantic information, extracted using a probabilistic topic model, into a recurrent neural network based language model. Gupta et al. [45] integrated a neural auto-regressive topic model into an LSTM recurrent neural network. Murakami and Chakraborty [46] investigated the use of word embeddings with NTMs to obtain interpretable topics from short texts. Grootendorst [47] proposed BERTopic, which generates document embeddings with pre-trained transformer-based language models and then produces topic representations with a class-based TF-IDF procedure. Two recent surveys [48, 49] provide comprehensive reviews of neural topic models, covering nearly a hundred models and a wide range of applications in natural language understanding such as text generation, summarization and language modeling. Despite the popularity of deep learning, our work focuses on traditional probabilistic methods because they are often easier to implement and more efficient to train, which may make them more suitable in resource-constrained environments where only limited computation and storage are available. Nevertheless, we will explore combining the proposed model with neural networks in future work.

2.2 Controversy analysis of legislative bills

Legislative voting is a major area of research. Most of the research focuses on ideal point estimation of the ideological positions of legislators, primarily for the purpose of predicting their voting patterns. An early work in this area presented a spatial model of legislative voting [50]. Londregan [51] estimated the preferred positions of legislators by modeling the legislative agenda. Cox and Poole [52] used a spatial model to assess the role of partisanship in influencing the votes of legislators. Variational methods were applied to predict votes [53]. Thomas et al. [54] modeled voting behavior from congressional debate transcripts. Gerrish and Blei [55] demonstrated roll call predictive models which link legislative text with legislative sentiment. The same authors [56] further derived approximate posterior inference algorithms based on variational methods to predict the positions of legislators. Fang et al. [57] analyzed public statements from legislators to build a contrastive opinion model of the legislators. Gu et al. [58] conducted ideal point estimations of legislators on the latent topics of voted documents.

Some of the work cited above utilized topic models. For example, Gerrish and Blei [55] extended LDA to build a generative model of votes and bills called the ideal point topic model. The model infers two bill-related latent variables: one explains bills that all legislators will vote for or against, while the other explains bills that do not have unanimous approval or disapproval. In addition, the model infers a latent variable for the legislators’ ideal points. As another example, Fang et al. [57] present the cross-perspective topic model, which unifies two identically extended LDA models to contrast the opinion words of a bipolar legislative body. The opinion words reflect the subjective positions of the polar entities on various topics. The model discriminates between opinion words and topic words by treating them as two separate observed variables.

In the broader field of controversy analysis, much work has been done on detecting contradictions in textual data. One of the early works studied the dynamics of conflicting opinions in texts by visually inspecting graphs [59]. Tsytsarau et al. [60] further investigated two types of contradictions, namely, “overlapping contradicting opinions” and “change of sentiment”. Many supervised learning approaches have been proposed for classifying texts into one of two opposing opinions using annotated controversial corpora, including sentences [61], documents [62] and document collections [61]. Some recent work addresses the task of identifying controversial content on Wikipedia [63, 64, 65] and on social media [66, 67, 68].

Table 1

Notations

𝒟       Corpus of documents
d       A document in 𝒟
w       A word that occurs in 𝒟
k       A topic
n(d,w)  Count of word w in d
K       Total number of topics
N       Total number of words
Nd      Number of words in d
M       Total number of documents
θdk     P(k|d)
𝜽𝒅      Topic distribution of d
𝚯       Matrix of all θdk
βkw     P(w|k)
𝜷𝒌      Word distribution of k
𝜷       Matrix of all βkw
Zdn     Topic of the nth word in d
Wdn     nth word in document d
𝑾       Matrix of all Wdn
cd      Response of d
𝒄       Vector of all cd
vk      Regression coefficient on topic k
𝒗       Vector of all vk
σ2      Variance of the Gaussian noise

Figure 1.

Graphical model representation of (a) PLSA and (b) sPLSA.


3. Supervised PLSA

3.1 Notations

Assume the corpus 𝒟 contains M documents with K topics. Nd is the number of words in document d. Each document d has two sets of observed variables: Wdn, which is the nth word of d; and cd, which is the response of d, such as the rating of a review. Table 1 lists the main notations used in the paper.

3.2 Generative process

Similar to many other topic models, sPLSA assumes that a document consists of multiple topics. Therefore, there is a distribution θ𝐝 over a fixed number K of topics for each document d. Like PLSA, this distribution is a multinomial distribution where each element θdk in the vector represents the probability that topic k appears in document d, i.e., θdk=P(k|d). In addition, we assume each topic represents a distribution over words w in a fixed vocabulary of size V, denoted by β𝐤. This distribution is also a multinomial distribution where each element βkw represents the probability that term w is chosen by topic k, i.e., βkw=P(w|k).

The essential difference between PLSA and sPLSA lies in the modeling of the response variable cd connected to document d. Under the sPLSA model, each document and response arises from the following generative process:

  • For each word w in document d:

    • Choose a topic $z_{dw} \sim \mathrm{Multinomial}(\boldsymbol{\theta}_d)$

    • Choose a word $w \sim \mathrm{Multinomial}(\boldsymbol{\beta}_{z_{dw}})$

  • Draw a response $c_d \sim \mathcal{N}(\boldsymbol{\theta}_d^{T}\mathbf{v}, \sigma^2)$

    • Here the response comes from a Gaussian linear model. The mean is the inner product of the topic distribution 𝜽𝒅 and the coefficient parameter vector 𝒗.
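For concreteness, the following minimal Python sketch (our illustration, not the authors' implementation) samples one document and its response under this generative process, assuming the parameters θ𝐝, 𝜷, 𝐯 and σ² are given and rng is a NumPy random generator:

```python
import numpy as np

def generate_document(theta_d, beta, v, sigma2, n_words, rng):
    """Sample one document and its response under the sPLSA generative process."""
    K, V = beta.shape
    words = []
    for _ in range(n_words):
        z = rng.choice(K, p=theta_d)   # topic z_dw ~ Multinomial(theta_d)
        w = rng.choice(V, p=beta[z])   # word  w    ~ Multinomial(beta_z)
        words.append(w)
    # response c_d ~ N(theta_d^T v, sigma^2)
    c = rng.normal(theta_d @ v, np.sqrt(sigma2))
    return words, c
```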

Figure 1 illustrates the graphical model representation of PLSA and sPLSA, respectively.

It is worth noting that our approach for modeling cd is different from that of sLDA. sLDA approximates a response variable, which in our case is cd, as a linear combination of the mean Zdn values. sLDA represents each Zdn=k as an indicator vector of length K where the kth position is set to 1 and the others are set to 0. sLDA evaluates the mean Zdn by taking the mean value of the vectors, which is expressed as $\bar{\mathbf{z}}_d = \frac{1}{N_d}\sum_{n=1}^{N_d} Z_{dn}$. In Section 4, we empirically show that using a linear combination of θdk instead of $\bar{z}_{dk}$ yields vk values that better factorize the response of the latent topics.

3.3 Likelihood function

The likelihood function in supervised PLSA consists of two parts. The first part is the likelihood for observing all the words in the corpus, 𝐖, given the topic distributions for the documents, 𝚯, and the word distributions for the topics, β. Mathematically, it is as follows:

(1)
$$P(\mathbf{W}\mid\boldsymbol{\Theta},\boldsymbol{\beta})=\prod_{d=1}^{M}\prod_{w\in d}\Big(\sum_{k=1}^{K}\theta_{dk}\beta_{kw}\Big)^{n(d,w)}$$

where n(d,w) is the number of times word w appears in document d. Therefore, the log likelihood of the observed words is

(2)
$$J_1(\boldsymbol{\Theta},\boldsymbol{\beta})=\sum_{d=1}^{M}\sum_{w\in d}n(d,w)\log\Big(\sum_{k=1}^{K}\theta_{dk}\beta_{kw}\Big)$$

The second part of the likelihood function comes from the likelihood of the response variable. As shown in the generative process, we assume a linear model with Gaussian noise for modeling the response cd. Specifically, we express cd as follows:

(3)
$$c_d \sim \mathcal{N}\Big(\sum_{k=1}^{K}\theta_{dk}v_k,\;\sigma^2\Big)$$

where vk is the coefficient for θdk. The expression indicates that cd is a random variable drawn from a Gaussian distribution with mean $\sum_{k=1}^{K}\theta_{dk}v_k$ and variance $\sigma^2$. The likelihood of observing all the responses is as follows:

(4)
$$P(\mathbf{c}\mid\boldsymbol{\Theta},\mathbf{v})=\prod_{d=1}^{M}\frac{1}{\sqrt{2\pi\sigma^2}}\exp\Bigg(-\frac{\big(c_d-\sum_{k=1}^{K}\theta_{dk}v_k\big)^2}{2\sigma^2}\Bigg)$$

where 𝐜 is a vector of all cd in the corpus, and 𝐯 is a vector of all vk. vk can be viewed as the contribution of topic k to the overall response. That is, the higher a vk value, the more its latent topic contributes to the response variable.

We assume a Gaussian prior on the coefficients vk, i.e., $v_k \sim \mathcal{N}(0,\eta^2)$, which is equivalent to L2 norm regularization. By ignoring some constants which do not impact the outcome of the likelihood maximization, the log likelihood of observing all the responses can be specified as:

(5)
$$J_2(\boldsymbol{\Theta},\mathbf{v})=-\sum_{d=1}^{M}\frac{\big(c_d-\sum_{k=1}^{K}\theta_{dk}v_k\big)^2}{2\sigma^2}-\frac{1}{2\eta^2}\sum_{k=1}^{K}v_k^2$$

Equations (2) and (5) share 𝚯 as a parameter to estimate. This means we will need to unify both likelihoods into a single unified likelihood equation in order to estimate 𝚯. We accomplish this by normalizing the two likelihoods, and then linearly combining Eqs (2) and (5) as follows:

(6)
$$J(\boldsymbol{\Theta},\boldsymbol{\beta},\mathbf{v})=(1-\lambda)\frac{J_1(\boldsymbol{\Theta},\boldsymbol{\beta})}{\sum_{d=1}^{M}\sum_{w\in d}n(d,w)}+\lambda\frac{J_2(\boldsymbol{\Theta},\mathbf{v})}{M}$$

where λ is a weighting constant, a real number in $[0,1]$. Its value affects the perplexity of the latent topics, β, inferred by the unified likelihood. We discuss this in Section 4.
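As a concrete reference, here is a minimal NumPy sketch of the unified likelihood in Eq. (6); the array names (counts for n(d,w), etc.) are our own, and a small epsilon guards the logarithm:

```python
import numpy as np

def unified_likelihood(counts, theta, beta, c, v, lam, sigma2, eta2, eps=1e-12):
    """Eq. (6): counts is M x V with counts[d, w] = n(d, w); theta is M x K;
    beta is K x V; c and v are as in Table 1; lam is the weighting constant."""
    # J1, Eq. (2): log likelihood of the observed words
    j1 = np.sum(counts * np.log(theta @ beta + eps))
    # J2, Eq. (5): Gaussian response likelihood plus the Gaussian prior on v
    j2 = -np.sum((c - theta @ v) ** 2) / (2.0 * sigma2) \
         - np.sum(v ** 2) / (2.0 * eta2)
    # Normalize the two parts and combine them linearly with weight lambda
    return (1.0 - lam) * j1 / counts.sum() + lam * j2 / len(c)
```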

3.4 Parameter estimation

Now that we have established the unified likelihood, we can use it to derive formulas for iteratively updating the parameters 𝒗, 𝜷, and 𝚯 in order to converge the likelihood to its maximum value. At a high-level, we iteratively update the parameters one at a time until the likelihood converges. We illustrate the process in Fig. 2.

Figure 2.

The iterative updates of the parameter estimation process.


3.4.1 Updating 𝐯

The values of 𝐯 are only found in the second term of the unified likelihood. This means we can simply use J2(𝚯,𝐯) as the maximization objective to update 𝐯 while fixing 𝚯 and β. If we use vector and matrix representations, maximizing J2(𝚯,𝐯) is equivalent to minimizing the following objective:

(7)
$$\min_{\mathbf{v}}\;(\mathbf{c}-\boldsymbol{\Theta}\mathbf{v})^{T}(\mathbf{c}-\boldsymbol{\Theta}\mathbf{v})+\frac{\sigma^2}{\eta^2}\mathbf{v}^{T}\mathbf{v}$$

It can be seen that the above objective function is strictly convex in 𝐯, since its Hessian is positive definite. By taking the first derivative of the function with respect to 𝐯 and setting it to zero, we obtain the analytic solution for 𝐯 as follows

(8)
$$\mathbf{v}=\Big(\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}+\frac{\sigma^2}{\eta^2}\mathbf{I}\Big)^{-1}\boldsymbol{\Theta}^{T}\mathbf{c}$$

This solution is equivalent to Ridge Regression or Tikhonov regularization [69].
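In code, this update is a single regularized linear solve; the sketch below is our own translation of Eq. (8), using a linear solve in place of an explicit matrix inverse for numerical stability:

```python
import numpy as np

def update_v(theta, c, sigma2, eta2):
    """Closed-form update of v, Eq. (8); equivalent to ridge regression."""
    K = theta.shape[1]
    A = theta.T @ theta + (sigma2 / eta2) * np.eye(K)
    return np.linalg.solve(A, theta.T @ c)  # solves A v = Theta^T c
```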

3.4.2 Updating β

The values of β are only found in the first term of the unified likelihood. This means we can simply use J1(𝚯,β) as the maximization objective. Similar to PLSA, we can use the EM algorithm to update β while fixing 𝚯 and 𝐯. In the E-step, we apply Bayes’ theorem and estimate the posterior probability of the topic k based on current parameters as follows:

(9)
$$P(k|d,w)=\frac{\theta_{dk}\beta_{kw}}{\sum_{k'=1}^{K}\theta_{dk'}\beta_{k'w}}$$

In the M-step, we maximize the expected complete data log-likelihood as follows:

(10)
$$\max_{\boldsymbol{\beta}}\;E(J_1)=\sum_{d=1}^{M}\sum_{w\in d}n(d,w)\sum_{k=1}^{K}P(k|d,w)\log(\theta_{dk}\beta_{kw})$$

with the constraint $\sum_{w}\beta_{kw}=1$. Here P(k|d,w) is obtained from the E-step. By using the Lagrange multiplier method to solve the constrained optimization problem in Eq. (10), we obtain the following update rule for β:

(11)
$$\beta_{kw}=\frac{\sum_{d=1}^{M}n(d,w)P(k|d,w)}{\sum_{d=1}^{M}\sum_{w'\in d}n(d,w')P(k|d,w')}$$
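A compact NumPy sketch of one such EM iteration follows; it is our own illustration under the notation above, computing the posterior of Eq. (9) one topic at a time to keep memory proportional to the count matrix:

```python
import numpy as np

def em_update_beta(counts, theta, beta, eps=1e-12):
    """One EM iteration for beta with theta and v fixed, Eqs. (9) and (11)."""
    mix = theta @ beta + eps                  # M x V: sum_k theta_dk * beta_kw
    new_beta = np.empty_like(beta)
    for k in range(beta.shape[0]):
        # E-step, Eq. (9): P(k|d,w) for this topic across all documents/words
        post_k = np.outer(theta[:, k], beta[k]) / mix
        # M-step numerator of Eq. (11): sum_d n(d,w) * P(k|d,w)
        new_beta[k] = (counts * post_k).sum(axis=0)
    # Denominator of Eq. (11): normalize each topic's word distribution
    return new_beta / new_beta.sum(axis=1, keepdims=True)
```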

3.4.3 Updating 𝚯

The values of 𝚯 are found in both terms of the unified likelihood. However, it is difficult to maximize J(𝚯,β,𝐯) with respect to 𝚯, because J1(𝚯,β) has a log-of-sums term that contains θdk. Instead of using J1(𝚯,β) in J(𝚯,β,𝐯), we use the same lower bound objective that the EM algorithm uses to approximate J1(𝚯,β), which is derived as follows:

(12)
$$\begin{aligned}
J_1(\boldsymbol{\Theta},\boldsymbol{\beta}) &= \sum_{d=1}^{M}\sum_{w\in d}n(d,w)\log\Big(\sum_{k=1}^{K}\theta_{dk}\beta_{kw}\Big)\\
&= \sum_{d=1}^{M}\sum_{w\in d}n(d,w)\log\Big(\sum_{k=1}^{K}P(k|d,w)\frac{\theta_{dk}\beta_{kw}}{P(k|d,w)}\Big)\\
&\geq \sum_{d=1}^{M}\sum_{w\in d}n(d,w)\sum_{k=1}^{K}P(k|d,w)\log\Big(\frac{\theta_{dk}\beta_{kw}}{P(k|d,w)}\Big)\\
&= \sum_{d=1}^{M}\sum_{w\in d}n(d,w)\sum_{k=1}^{K}P(k|d,w)\log(\theta_{dk})\\
&\quad+\sum_{d=1}^{M}\sum_{w\in d}n(d,w)\sum_{k=1}^{K}P(k|d,w)\log(\beta_{kw})\\
&\quad-\sum_{d=1}^{M}\sum_{w\in d}n(d,w)\sum_{k=1}^{K}P(k|d,w)\log(P(k|d,w))
\end{aligned}$$

where the inequality follows from Jensen's inequality applied to the concave logarithm.

Since the second and third terms in the above lower bound are constants with respect to 𝚯, we can drop them to obtain a simpler lower bound objective for optimizing 𝚯. The objective is as follows:

(13)
$$J_3(\boldsymbol{\Theta})=\sum_{d=1}^{M}\sum_{w\in d}n(d,w)\sum_{k=1}^{K}P(k|d,w)\log(\theta_{dk})$$

This means we use the following objective instead of the unified likelihood to update 𝚯:

(14)
$$J_L(\boldsymbol{\Theta},\mathbf{v})=(1-\lambda)\frac{J_3(\boldsymbol{\Theta})}{\sum_{d=1}^{M}\sum_{w\in d}n(d,w)}+\lambda\frac{J_2(\boldsymbol{\Theta},\mathbf{v})}{M}$$

The above objective is a concave function with respect to 𝚯 when we fix 𝐯. We can solve for the values of 𝚯 that maximize the objective provided that the following constraint is met for every document d.

(15)
$$\sum_{k=1}^{K}\theta_{dk}=1,\quad\forall d\in\mathcal{D}$$

The constraint must be met because each 𝜽𝒅 is a probability distribution. However, this constraint results in a constrained optimization problem that is hard to solve with a simple closed-form expression like the one for estimating βkw (Eq. (11)). This is because the gradient of JL(⋅) with respect to θdk (Eq. (25)) yields an expression that couples θdk with all the other topic proportions of the document, making a closed-form solution for θdk difficult to find. To overcome this difficulty, we transform the constrained optimization problem into an unconstrained optimization problem by expressing θdk in terms of a parameter τdk as follows:

(16)
$$\theta_{dk}=\mathrm{SOFTMAX}(\tau_{dk})$$

where $\tau_{dk}\in\mathbb{R}$, and SOFTMAX(⋅) is defined as follows:

(17)
$$\mathrm{SOFTMAX}(\tau_{dk})=\frac{\exp(\tau_{dk})}{\sum_{k'=1}^{K}\exp(\tau_{dk'})}$$

Irrespective of the value of τdk, SOFTMAX(τdk) returns a value in the range $[0,1]$, and the sum of SOFTMAX(τdk) over all $\tau_{dk}\in\boldsymbol{\tau}_d$ is always 1. As a result, expressing θdk in terms of SOFTMAX(τdk) innately allows θdk to satisfy the constraint, and effectively transforms the constrained optimization into an unconstrained optimization problem.

Furthermore, we can reduce the number of τdk parameters from K to K-1, because one τdk is redundant since:

(18)
$$\theta_{dK}=\mathrm{SOFTMAX}(\tau_{dK})=1-\sum_{k=1}^{K-1}\mathrm{SOFTMAX}(\tau_{dk})$$

To remove the redundant parameter, we note that subtracting a value h from all τdk does not change the value of SOFTMAX(⋅):

(19)
$$\mathrm{SOFTMAX}(\tau_{dk}-h)=\frac{\exp(\tau_{dk}-h)}{\sum_{k'=1}^{K}\exp(\tau_{dk'}-h)}=\frac{\exp(\tau_{dk})\exp(-h)}{\sum_{k'=1}^{K}\exp(\tau_{dk'})\exp(-h)}=\frac{\exp(\tau_{dk})}{\sum_{k'=1}^{K}\exp(\tau_{dk'})}=\mathrm{SOFTMAX}(\tau_{dk})$$

As a result, choosing $h=\tau_{dK}$, we can express each τdk with an auxiliary parameter μdk as follows:

(20)
$$\mu_{dk}=\tau_{dk}-\tau_{dK}$$

This results in $\mu_{dK}=0$, which eliminates μdK as a parameter of 𝝁𝒅. Therefore, SOFTMAX(⋅) simplifies to the following when $1\leq k\leq K-1$:

(21)
$$\mathrm{SOFTMAX}(\mu_{dk})=\frac{\exp(\mu_{dk})}{1+\sum_{k'=1}^{K-1}\exp(\mu_{dk'})}$$

and to the following when k=K:

(22)
$$\mathrm{SOFTMAX}(0)=\frac{1}{1+\sum_{k'=1}^{K-1}\exp(\mu_{dk'})}$$

Finally, we can express θdk as follows in terms of 𝝁𝒅:

(23)
$$\theta_{dk}=\begin{cases}\dfrac{\exp(\mu_{dk})}{1+\sum_{k'=1}^{K-1}\exp(\mu_{dk'})} & \text{if } 1\leq k\leq K-1\\[2ex]\dfrac{1}{1+\sum_{k'=1}^{K-1}\exp(\mu_{dk'})} & \text{if } k=K\end{cases}$$

The above representation of θdk ensures Eq. (15) holds. Therefore, instead of doing a constrained maximization with respect to 𝚯, we perform an unconstrained maximization with respect to μ.

We use the gradient ascent algorithm to maximize the objective function JL(𝚯,𝐯) in Eq. (14) with respect to μ by fixing 𝐯. The partial derivative we use to update each μdk is as follows:

(24)
$$\frac{\partial J_L}{\partial\mu_{dk}}=\sum_{k'=1}^{K}\frac{\partial J_L}{\partial\theta_{dk'}}\frac{\partial\theta_{dk'}}{\partial\mu_{dk}}$$

where:

(25)
$$\frac{\partial J_L}{\partial\theta_{dk}}=\frac{1-\lambda}{\sum_{d'=1}^{M}\sum_{w\in d'}n(d',w)}\sum_{w\in d}\frac{n(d,w)\,P(k|d,w)}{\theta_{dk}}+\frac{\lambda}{\sigma^2 M}\Big(c_d-\sum_{k'=1}^{K}\theta_{dk'}v_{k'}\Big)v_k$$

and:

(26)
$$\frac{\partial\theta_{dk'}}{\partial\mu_{dk}}=\begin{cases}\theta_{dk}(1-\theta_{dk}) & \text{if } k'=k\\-\theta_{dk'}\theta_{dk} & \text{if } k'\neq k\end{cases}$$

After we update each μdk, we update each θdk using Eq. (23).
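The sketch below (our own, for a single document) puts Eqs. (23)–(26) together: θ𝐝 is recovered from μ via the softmax with a pinned last logit, and one gradient-ascent step is taken through the softmax Jacobian; grad_theta is the gradient of Eq. (25) and step is a hypothetical learning rate:

```python
import numpy as np

def theta_from_mu(mu):
    """Eq. (23): map the K-1 free parameters mu_d to a K-dim distribution."""
    logits = np.append(mu, 0.0)        # mu_dK is pinned to zero, Eq. (20)
    e = np.exp(logits - logits.max())  # the shift is harmless by Eq. (19)
    return e / e.sum()

def ascent_step_mu(mu, grad_theta, step=0.1):
    """One gradient-ascent step on mu_d given dJ_L/dtheta_d, Eqs. (24)-(26)."""
    theta = theta_from_mu(mu)
    # Softmax Jacobian, Eq. (26): d theta_k' / d mu_k = theta_k'(delta_kk' - theta_k)
    jac = np.diag(theta) - np.outer(theta, theta)
    grad_mu = (jac @ grad_theta)[:-1]  # drop the pinned component mu_dK
    return mu + step * grad_mu
```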

3.5 Inference

After the parameter estimation is completed, we do the following to infer the latent topics and their factorized response values:

  • We infer the latent topics from the topic-word distribution 𝜷 by ranking the words for each latent topic k in descending order of the probability of the words belonging to the topic (βkw). We then extract the most probable words of the topic to get an intuition about what each latent topic is about. We do this by analyzing the semantics of the extracted words.

  • We infer the factorized response for each latent topic k from its vk value. The larger the vk value, the more dominant the topic is in determining the response variable.
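Both inference steps reduce to simple array operations; the following sketch uses the notation above, where vocab is an assumed list mapping word indices to strings:

```python
import numpy as np

def top_words(beta, vocab, n=5):
    """Most probable words per topic from the topic-word distribution beta."""
    return [[vocab[w] for w in np.argsort(beta[k])[::-1][:n]]
            for k in range(beta.shape[0])]

def topics_by_response(v):
    """Topic indices ordered from the highest to the lowest factorized v_k."""
    return np.argsort(v)[::-1]
```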

4. Experiments

In this section, we discuss the dataset we used to test sPLSA, present experimental results, and compare our model to the baselines.

4.1 Dataset

We tested sPLSA using bills which were placed for a vote in the United States Congress. The objective of our test is to generate the latent topics of the bills, and then rank them by controversy. We do this by first assigning a controversy score to each bill followed by inferring the factorized controversy score of each topic using sPLSA. We assign a controversy score to each bill by using the spread of the number of yes and no votes. The formula we use is as follows:

(27)
$$c_d=1-\frac{|a_d-b_d|}{a_d+b_d}$$

where cd is the controversy score of bill d, ad is the number of yes votes for the bill, and bd is the number of no votes for the bill. A value of 0 indicates no controversy and occurs when the votes are either all yes or all no. A value of 1 indicates maximum controversy and occurs when the number of yes and no votes are evenly split. sPLSA uses the cd value of the bills as the response variable and generates the latent topics of the bills. We use the vk values generated by the model to rank the latent topics by controversy.
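The score is a one-liner in code; for example, the 221 yes and 199 no votes of bill H.R. 609 in Table 8 give a score of about 0.95:

```python
def controversy_score(yes_votes, no_votes):
    """Eq. (27): 0 for a unanimous vote, 1 for an evenly split vote."""
    return 1.0 - abs(yes_votes - no_votes) / (yes_votes + no_votes)

print(round(controversy_score(221, 199), 2))  # 0.95
```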

We selected congressional bills and their controversy scores as our dataset to demonstrate the application of sPLSA to a real-world problem. Specifically, we want to identify contentious issues in the United States Congress by generating their latent topics. By inferring their relative controversy using sPLSA, we can rank the topics by controversy and identify the contentious issues by selecting the most controversial topics.

We collected bills starting from the 100th Congress and ending with the 114th Congress. This is for the years 1987 to early 2016. We used the Vote API of GovTrack to obtain information about the votes. Next, we discarded the votes that are not associated with a bill. For votes that are associated with a bill, we kept the final votes for the bill. Finally, we obtained the digital content of the bills from the website of the U.S. Government Publishing Office. We only obtained the content of bills that had a plain text version.

We were able to collect the votes and content of 6,403 bills. Of these, 5,531 bills were from the House of Representatives and 872 bills were from the Senate. 6,160 bills had more yes votes than no votes, and 243 bills had more no votes than yes votes. Figure 3 shows the distribution of the bills’ controversy scores.

Figure 3.

Histogram of the distribution of the response variable calculated using Eq. (27).


We did the following preprocessing of the bills to create our dataset:

  • Removed words which have characters that are not in the English alphabet.

  • Removed words less than 4 characters in length.

  • Removed common English words using Mallet’s stop-word list.

  • Removed domain specific words using a custom stop-word list. The stop-word list has 157 words, and we created it by analyzing the word frequency of the bills. It mostly consists of legal terms.

  • Selected the 15,000 most frequent words as the vocabulary of our corpus.

We then created the dataset as a bag-of-words representation of each bill.

4.2 Setup

We randomly partition our dataset as follows: 80% for training, 10% for validation, and 10% for testing. We initialize μ by sampling from a Gaussian distribution with mean 0 and variance 1. We initialize 𝚯 from the initial values of μ using Eq. (23). We initialize β by sampling from a uniform distribution and then normalizing each β𝐤, the word distribution of topic k, so that it becomes a valid probability distribution. We initialize 𝐯 by setting all vk=0. We initialize values for λ, K, and η depending on the experiment we are running. Our implementation of sPLSA iteratively updates 𝐯, β, and μ in lockstep until the unified likelihood converges. At the beginning of each iteration, we update 𝚯 using Eq. (23).
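The initialization described above amounts to a few lines of NumPy; the sketch below uses our own names and a fixed seed for reproducibility:

```python
import numpy as np

def initialize(M, K, V, seed=0):
    """Initialize mu, Theta, beta and v as described in Section 4.2."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(0.0, 1.0, size=(M, K - 1))        # mu ~ N(0, 1)
    logits = np.concatenate([mu, np.zeros((M, 1))], axis=1)
    e = np.exp(logits)
    theta = e / e.sum(axis=1, keepdims=True)          # Theta from mu, Eq. (23)
    beta = rng.uniform(size=(K, V))                   # uniform, then normalized
    beta /= beta.sum(axis=1, keepdims=True)
    v = np.zeros(K)                                   # all v_k = 0
    return mu, theta, beta, v
```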

4.3 Evaluation metrics

We test a trained model by folding-in the test dataset similar to the way specified in [1]. This is essentially the same as training the model with the test dataset except β and 𝐯 are not updated, and their values are obtained from the trained model. The only parameter we estimate in the folding-in process is 𝚯. We evaluate the performance of the model with test data as follows:

  • For 𝚯 and β, we use the perplexity of the topics inferred from the test dataset. The lower the perplexity is, the better the values of 𝚯 and β.

  • For 𝐯, we use Pearson correlation to correlate each vk with the average controversy score of the bills which have k as their most probable topic (sk). sk is calculated as follows:

    (28)
    $$s_k=\frac{\sum_{d}\mathbb{1}\{\max\boldsymbol{\theta}_d=\theta_{dk}\}\,c_d}{\sum_{d}\mathbb{1}\{\max\boldsymbol{\theta}_d=\theta_{dk}\}}$$

    The higher the correlation between 𝐯 and 𝐬 is, the better the values of 𝐯, and the better 𝐯 represents the relative controversy between the topics.

We can correlate each vk with sk because the θ𝐝 are sparse. An example of the sparsity is illustrated in Fig. 4, where we aggregate the average probability of the kth order statistic of the topics in all θ𝐝 for K=20 and λ=0.5. We can clearly see from the plot that the 20th order statistic is by far the most dominant topic.
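A sketch of both metrics follows; this is our own code, assuming the standard held-out perplexity formulation, and topics to which no bill is assigned are skipped in the correlation:

```python
import numpy as np
from scipy.stats import pearsonr

def perplexity(counts, theta, beta, eps=1e-12):
    """Held-out perplexity of the word model: exp(-log-likelihood per word)."""
    return np.exp(-np.sum(counts * np.log(theta @ beta + eps)) / counts.sum())

def v_s_correlation(theta, c, v):
    """Pearson correlation between v_k and s_k, with s_k as in Eq. (28)."""
    top = theta.argmax(axis=1)             # most probable topic per bill
    ks = [k for k in range(len(v)) if np.any(top == k)]
    s = np.array([c[top == k].mean() for k in ks])
    return pearsonr(v[ks], s)[0]
```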

Figure 4.

The sparsity of θ𝐝 by plotting the average probability of the kth order statistic of the topics in all θ𝐝 for K=20 and λ=0.5.


4.4 Results

Table 2 shows the top 5 words for the 1st, 2nd, median, 2nd last, and last controversial topics for four experiments. Each experiment selected a unique K∈{10,20,30,40}, and all the experiments set λ=0.5 and η=1. As we will see later, our choices of λ and η are optimal for our dataset. In addition to the top 5 words, the table shows the vk coefficient of each topic. As we mentioned earlier, the vk values estimate the controversy of the topics, and we use them to select the topics shown in the table. For K=10, we find that some of the topics overlap. For example, the words “fundraising”, “expense” and “fisa” (Foreign Intelligence Surveillance) appear in multiple topics. This is because the number of topics is insufficient. On the other hand, K=40 has more granular topics that overlap less. For K=40, we can infer from the words that the 1st topic is about higher education, the 2nd is about funding, the 3rd is about child education, the 4th is either about religion or sequester-related budget cuts, and the 5th topic is about housing. We can therefore conclude that K has to be large enough to avoid topic overlap. We also observe that as the value of K increases, so does the variance of the vk values. This means that the most controversial topics for larger K values are more controversial than the most controversial topics for smaller K values. This makes intuitive sense, since the overlap between the most controversial topics and other topics gets smaller as K increases.

Table 2

The top 5 words for the 1st, 2nd, median, 2nd last, and last most controversial topics of selected K values as well as the vk values of the topics

Rank  K=10            K=20            K=30            K=40
1     v=1.36          v=1.886         v=1.942         v=2.527
      Fundraising     Fisa            Alien           Educating
      Expense         Buddhist        Immigrants      Institutes
      Fisa            Outdoor         Secures         High
      Alien           Functions       Employing       Struggle
      Secures         Loaded          Stationing      Structuring
2     v=1.145         v=1.268         v=1.885         v=2.262
      Fundraising     Fundraising     Fisa            Payers
      Fisa            Expense         Buddhist        Fisa
      Buddhist        Relocates       Expense         Indispensable
      Expense         Appropriations  Fundraising     Paygo
      Appropriations  Expendable      Functions       Plain
K/2   v=-0.508        v=-0.251        v=0.229         v=0.106
      Defending       Finances        Foregone        Education
      Milestone       Commissary      Internal        Schofield
      Fisa            Board           Secures         Lobbying
      Forced          Persian         Nation          Educating
      Forbs           Companionship   Verifying       Childless
K-1   v=-0.844        v=-1.316        v=-2.381        v=-2.188
      Plain           Propene         Defending       Chances
      Propene         Therapeutics    Milestone       Header
      Lancaster       Chaplains       Proclaimed      Chaplains
      Mammography     Lien            Forbs           Sequester
      Tarp            Fisa            Researcher      Frederick
K     v=-0.977        v=-1.829        v=-3.604        v=-2.569
      Healing         Drowning        Commissary      Houses
      Houses          Prison          Safeguarding    Expense
      Fundraising     Bushel          Chaplains       Prohibited
      Secures         Mammography     Vessel          Amounts
      Payers          Sttr            Transnational   Distributors

4.4.1 Comparison to baseline

Our baseline is an sLDA model. The response variable for the model is the controversy score. We used the 'slda.em' function in the R “lda” package to train the sLDA model on the training dataset. We ran the model for each K∈{10,20,30,40}, setting α=0.1, β=0.1, and variance=0.25. We then correlated the vk and sk values of sLDA and compared the correlation to that of our model when λ=0.5 and η=1. For a fair comparison, we used the training data to evaluate the sk values. This is because, unlike sPLSA, sLDA does not have access to the response variable when using test data, since its purpose is to predict the response variable. Table 3 illustrates the comparison. From the table, we clearly see that our model correlates significantly better as K increases in value. This is the case because the vk values of sPLSA are trained on the topic distributions, θ𝐝, whereas the equivalent coefficients in sLDA are trained on the realized topic distributions. For sLDA, the variance between the θ𝐝 and the realized topic distributions significantly increases as K increases, and this deteriorates the ability of the sLDA coefficients to approximate the controversy of the θ𝐝.

Table 3

Comparison of the Pearson correlation between vk and sk for our model and sLDA

K      10      20      30      40
sPLSA  0.9837  0.9850  0.9632  0.9182
sLDA   0.9530  0.8427  0.7550  0.4614

sPLSA is designed for topic discovery and latent response inference. This comes at the expense of its prediction performance. Theoretically, we can use sPLSA in a semi-supervised setting where we mix both labeled and unlabeled data, and then try to predict the labels of the unlabeled data. In such a scenario, we update 𝐯 using the labeled data, and β and 𝚯 using both the labeled and unlabeled data. However, for the unlabeled data, we update 𝚯 by setting λ=0, since we do not have a response value. Once we train our model, we linearly combine the θ𝐝 values of the unlabeled data with the 𝐯 values to predict the labels. Figure 5 shows the RMSE values of our model’s predictions versus the RMSE values of the sLDA predictions. Clearly, we can see that the RMSE values of our model are significantly worse than the RMSE values of sLDA. Our model performs weakly here because it uses the θ𝐝 values to do the prediction. The θ𝐝 values are the average estimate for the topic distributions of the words in each document, whereas sLDA uses the realized topic distributions of the words, Zd.

4.4.2 Efficiency

We ran sPLSA and sLDA on a MacBook Pro laptop with a 2 GHz processor and 16 GB of RAM on the training dataset for various values of K. Figure 6 compares the training time of sPLSA with that of sLDA for various values of K. As we can see, sPLSA trained at least 6 times faster than sLDA. This is the case because the EM algorithm used by sPLSA converges much faster than the Gibbs sampling used by the sLDA implementation. This is despite the fact that our implementation is a single-threaded Java program not optimized for efficiency, while the core of sLDA is efficiently implemented in C.

Gibbs sampling converges much more slowly than the EM algorithm because the topics tend to depend on one another. This prolongs the burn-in period of the Gibbs sampling process, during which a stationary distribution has not yet been reached; a stationary distribution must be reached before the actual sampling can take place. During the burn-in period, the Gibbs sampling process can diverge at times. On the other hand, EM has no equivalent of a burn-in period, and every iteration of the algorithm is guaranteed to monotonically improve the likelihood.

4.4.3 Impact of η

We trained the model with K=20 and λ=0.5 for each $\eta\in\{10^{-3},10^{-2},10^{-1},1,10,10^{2},10^{3}\}$. We then tested each model with the validation dataset, and obtained the results shown in Table 4. Overall, we can see from the table that η=1 yields the best results.

Table 4

The perplexity and Pearson correlation values on the validation dataset for different values of η

η            0.001   0.01    0.1     1       10      100     1000
Perplexity   2072.6  2307.3  2062.4  2102.3  2056.7  2151.2  2053.5
Correlation  0.335   -0.302  0.982   0.987   0.951   0.966   0.976

Figure 5.

The prediction RMSE values of sPLSA and sLDA at various values of K.


Figure 6.

Comparison of the training time of sPLSA and sLDA for various values of K.


4.4.4 Impact of λ

We trained the model for each combination of K∈{10,20,30,40} and λ from 0 to 1 in increments of 0.1. We then tested the model using the test dataset. Figure 7 shows the perplexity values for the various combinations of K and λ.

Figure 7.

The perplexity for various combinations of K and λ.


From the figure, we can generally see that as λ increases the perplexity increases as well. For smaller K values this increase is noisy, but for larger K values it gets smoother. The increase in perplexity accelerates as λ approaches 1. We also notice that as K gets larger the overall perplexity gets lower.

Figure 8 shows the values of the Pearson correlation between vk and sk for the various combinations of K and λ.

Figure 8.

The Pearson correlation between vk and sk for various combinations of K and λ.


From the figure, we can generally see that the correlation increases steeply from λ=0 to approximately λ=0.3. The increase then decelerates rapidly, and the correlation levels off within a noisy range. We can conclude from Figs 7 and 8 that, for a fixed K, improving the perplexity by decreasing λ generally deteriorates the correlation and vice versa. However, there is a range of λ values between 0.4 and 0.5 where the perplexity is not far from the lowest perplexity and the correlation is not much different from the maximum correlation. Our ideal λ is therefore in the range of [0.4, 0.5] for our dataset.

4.4.5 Sample topics

Tables 5, 6, and 7 show the top words for the topics generated by PLSA, sLDA, and sPLSA. In general, we can see that very similar topics are generated by all three models. For example, topic 7 for PLSA, topic 2 for sLDA, and topic 6 for sPLSA are about education. This illustrates that the perplexity trade-off we did in selecting λ=0.5 did not adversely affect the quality of the topics generated by sPLSA.

Table 5

Top 5 words for the topics of PLSA when K=10

1   Transnational  Alien       Finances        Board          Persian
2   Fisa           Buddhist    Outdoor         Loaded         Healing
3   Washoe         Prohibited  Enemy           Lancaster      Mammography
4   Healing        Plain       Indispensable   Cards          Chiefs
5   Defending      Milestone   Fisa            Forbs          Forced
6   Plain          Houses      Inclusive       Indispensable  Propene
7   Education      Educating   Lobbying        Schofield      Fundraising
8   Secures        Intell      Directly        Foregone       Rescind
9   Fundraising    Expense     Appropriations  Fisa           Relocates
10  Defending      Fisa        Fundraising     Milestone      Plain

Table 6

Top 5 words for the topics of sLDA when K=10

1   Foregone     Internally    Securitization  Nationally       Countries
2   Educating    Education     Lobbyists       Scholars         Childless
3   Fundraising  Blvd          Subsystem       Administrator    Entitles
4   Lance        Wastewater    Enemy           Conservancy      Prohibition
5   Defending    Milford       Forbs           Forced           Armstrong
6   Authorities  Constructing  Traineeships    Subsystem        Systematically
7   Plains       Healing       Paying          Cards            Inclusive
8   Persistent   Commissary    Coursework      Finances         Attitudes
9   Fundraising  Expense       Relocations     Transplantation  Appropriation
10  Fisa         Buddhist      Fundraising     Reseller         Securitization

Table 7

Top 5 words for the topics of sPLSA when K=10

1   Fundraising    Expense        Fisa         Alien         Secures
2   Healing        Houses         Fundraising  Secures       Payers
3   Finances       Companionship  Commissary   Bank          Lobbying
4   Transnational  Prohibited     Enemy        Washoe        Synthetic
5   Persian        Font           Loaded       Agricultural  Eligibility
6   Educating      Healing        Lobbying     Education     Fisa
7   Plain          Propene        Lancaster    Mammography   Tarp
8   Fundraising    Fisa           Buddhist     Expense       Appropriations
9   Defending      Milestone      Fisa         Forced        Forbs
10  Fisa           Healing        Plain        Secures       Transnational

4.5 Case study

For each topic listed in Table 2 where K=40, we sampled the bill which has the highest probability for the topic. We summarize the bills and analyze their connectedness to their corresponding topics in Tables 8, 9, 10, 11, and 12. As we can see from the tables, the controversy scores of the bills closely align with the controversy levels of the topics. In addition, the themes of the topics we specified at the beginning of Section 4.4 partially or fully match the themes of the bills, with the exception of the bill for the second least controversial topic. This is primarily because the theme of the second least controversial topic is hard to determine based on its top words.

Table 8

Sample bill for the most controversial topic

Bill ID            H.R. 609
Title              College Access and Opportunity Act
Year               2006
Yes Votes          221
No Votes           199
Controversy Score  0.95
Topic Probability  0.52
Description        This bill is about higher education, and amends the Higher Education Act of 1965.
Analysis           The controversy score is on the high end, and the theme of the bill, higher education, matches that of the topic.

Table 9

Sample bill for the second most controversial topic

Bill ID            H.R. 2491
Title              Budget Reconciliation Act of 1995
Year               1995
Yes Votes          235
No Votes           192
Controversy Score  0.90
Topic Probability  0.50
Description        This bill is about the federal budget for 1996.
Analysis           The controversy score is close to the high end, and the theme of the bill, funding, matches that of the topic.

Table 10

Sample bill for the most moderately controversial topic

Bill ID            H.R. 2
Title              Student Results Act of 1999
Year               1999
Yes Votes          358
No Votes           67
Controversy Score  0.31
Topic Probability  0.91
Description        This bill is about child education.
Analysis           The controversy score is in the middle range, and the theme of the bill, child education, matches that of the topic.

Table 11

Sample bill for the second least controversial topic

Bill ID            S. RES. 501
Title              A resolution honoring the sacrifice of the members of the United States Armed Forces who have been killed in Iraq and Afghanistan.
Year               2008
Yes Votes          95
No Votes           0
Controversy Score  0.00
Topic Probability  0.78
Description        As the title indicates, this bill is a resolution honoring servicemen killed in combat.
Analysis           The controversy score is the lowest possible. However, it is hard to align the theme of the bill with that of the topic.

Table 12

Sample bill for the least controversial topic

Bill ID            H.R. 2158
Title              Departments of Veterans Affairs and Housing and Urban Development, and Independent Agencies Appropriations Act, 1998
Year               1998
Yes Votes          397
No Votes           31
Controversy Score  0.10
Topic Probability  0.34
Description        This bill is about benefits to veterans. Among the benefits is a program account to fund veterans housing benefits.
Analysis           The controversy score is close to the low end. The theme of the bill partially matches that of the topic.

5. Conclusion and future work

In this paper, we introduce sPLSA, an extension of PLSA that is to PLSA what sLDA is to LDA. Similar to sLDA, sPLSA processes a response variable associated with the documents to factorize the responses on a per-topic basis. We discuss the advantages sPLSA has over sLDA for latent response analysis, such as ranking the topics by their factorized responses, and its superior execution efficiency. In addition, we discuss the advantage sLDA has over sPLSA in predicting the responses of documents. We experimentally demonstrated sPLSA on a real-world problem by performing a latent controversy analysis of topics inferred from the bills of the United States Congress.

This work is an initial step towards a promising research direction. The presented model assumes the response comes from a Gaussian linear model. This assumption can be relaxed by extending the distribution of the response to a generalized linear model (GLM) [70], which allows for response variables that have error distributions other than the Gaussian. In future work, we plan to extend sPLSA to other types of response variables, including the multinomial, Poisson, gamma, Weibull, and inverse Gaussian. This will allow us to apply sPLSA to latent topic analysis on a more diverse set of problems. Last but not least, we will explore combining the proposed model with neural networks, leveraging their capacity for modeling nonlinearity, and extend the work to the realm of neural topic models [71].

References

[1] T. Hofmann, Probabilistic latent semantic analysis, in: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers Inc., 1999, pp. 289–296.

[2] T. Hofmann, Probabilistic latent semantic indexing, in: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 1999, pp. 50–57.

[3] T. Hofmann, Latent semantic models for collaborative filtering, ACM Transactions on Information Systems (TOIS) 22(1) (2004), 89–115.

[4] J. Sivic, B.C. Russell, A.A. Efros, A. Zisserman and W.T. Freeman, Discovering object categories in image collections, in: Proceedings of IEEE International Conference on Computer Vision, 2005, pp. 134–141.

[5] M. Hoffman, D. Blei and P.R. Cook, Finding latent sources in recorded music with a shift-invariant HDP, in: Proceedings of the Conference on Digital Audio Effects, 2009, pp. 121–128.

[6] S. Deerwester, S.T. Dumais, G.W. Furnas, T.K. Landauer and R. Harshman, Indexing by latent semantic analysis, Journal of the American Society for Information Science 41(6) (1990), 391.

[7] D.M. Blei, A.Y. Ng and M.I. Jordan, Latent Dirichlet allocation, The Journal of Machine Learning Research 3 (2003), 993–1022.

[8] M. Girolami and A. Kabán, On an equivalence between PLSI and LDA, in: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2003, pp. 433–434.

[9] T.L. Griffiths and M. Steyvers, Finding scientific topics, Proceedings of the National Academy of Sciences 101(Suppl 1) (2004), 5228–5235.

[10] V.-A. Nguyen, J.L. Boyd-Graber and P. Resnik, Sometimes average is best: The importance of averaging for prediction using MCMC inference in topic modeling, in: EMNLP, 2014, pp. 1752–1757.

[11] Y. Lu, Q. Mei and C. Zhai, Investigating task performance of probabilistic topic models: An empirical study of PLSA and LDA, Information Retrieval 14(2) (2011), 178–203.

[12] J.D. Mcauliffe and D.M. Blei, Supervised topic models, in: Advances in Neural Information Processing Systems, 2008, pp. 121–128.

[13] T. Hofmann, The cluster-abstraction model: Unsupervised learning of topic hierarchies from text data, in: IJCAI, Vol. 99, 1999, pp. 682–687.

[14] T. Hofmann, J. Puzicha and M.I. Jordan, Learning from dyadic data, Advances in Neural Information Processing Systems (1998), 466–472.

[15] T. Hofmann and J. Puzicha, Unsupervised Learning from Dyadic Data, Technical Report (1998), 1–32.

[16] C. Zhai, A. Velivelli and B. Yu, A cross-collection mixture model for comparative text mining, in: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2004, pp. 743–748.

[17] Q. Mei and C. Zhai, A mixture model for contextual text mining, in: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2006, pp. 649–655.

[18] Q. Mei, D. Cai, D. Zhang and C. Zhai, Topic modeling with network regularization, in: Proceedings of the 17th International Conference on World Wide Web, ACM, 2008, pp. 101–110.

[19] M. Rosen-Zvi, T. Griffiths, M. Steyvers and P. Smyth, The author-topic model for authors and documents, in: Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, AUAI Press, 2004, pp. 487–494.

[20] T. Iwata, T. Hirao and N. Ueda, Probabilistic latent variable models for unsupervised many-to-many object matching, Information Processing & Management 52(4) (2016), 682–697.

[21] I. Vulić, W. De Smet, J. Tang and M.-F. Moens, Probabilistic topic modeling in multilingual settings: An overview of its methodology and applications, Information Processing & Management 51(1) (2015), 111–147.

[22] D.M. Blei, Probabilistic topic models, Communications of the ACM 55(4) (2012), 77–84.

[23] C. Wang, D. Blei and F.-F. Li, Simultaneous image classification and annotation, in: Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, IEEE, 2009, pp. 1903–1910.

[24] S. Lacoste-Julien, F. Sha and M.I. Jordan, DiscLDA: Discriminative learning for dimensionality reduction and classification, in: Advances in Neural Information Processing Systems, 2009, pp. 897–904.

[25] J. Zhu, A. Ahmed and E.P. Xing, MedLDA: Maximum margin supervised topic models for regression and classification, in: Proceedings of the 26th Annual International Conference on Machine Learning, ACM, 2009, pp. 1257–1264.

[26] S. Jameel, W. Lam and L. Bing, Supervised topic models with word order structure for document classification and retrieval learning, Information Retrieval Journal 18(4) (2015), 283–330.

[27] D. Ramage, D. Hall, R. Nallapati and C.D. Manning, Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora, in: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1, Association for Computational Linguistics, 2009, pp. 248–256.

[28] M. Kar, S. Nunes and C. Ribeiro, Summarization of changes in dynamic text collections using Latent Dirichlet Allocation model, Information Processing & Management 51(6) (2015), 809–833.

[29] S. Park, W. Lee and I.-C. Moon, Associative topic models with numerical time series, Information Processing & Management 51(5) (2015), 737–755.

[30] K. Seshadri, S.M. Shalinie and C. Kollengode, Design and evaluation of a parallel algorithm for inferring topic hierarchies, Information Processing & Management 51(5) (2015), 662–676.

[31] F. Colace, M. De Santo, L. Greco and P. Napoletano, Weighted word pairs for query expansion, Information Processing & Management 51(1) (2015), 179–193.

[32] E.B. Sudderth, A. Torralba, W.T. Freeman and A.S. Willsky, Learning hierarchical models of scenes, objects, and parts, in: Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, Vol. 2, IEEE, 2005, pp. 1331–1338.

[33] L.-J. Li, R. Socher and L. Fei-Fei, Towards total scene understanding: Classification, annotation and segmentation in an automatic framework, in: Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, IEEE, 2009, pp. 2036–2043.

[34] D. Mimno and A. McCallum, Topic models conditioned on arbitrary features with Dirichlet-multinomial regression, International Conference on Uncertainty in Artificial Intelligence (UAI) (2008).

[35] K. Thambiratnam and F. Seide, Learning spoken document similarity and recommendation using supervised probabilistic latent semantic analysis, in: INTERSPEECH, 2007, pp. 334–337.

[36] R. Fergus, L. Fei-Fei, P. Perona and A. Zisserman, Learning object categories from Google’s image search, in: Proceedings of IEEE International Conference on Computer Vision, 2005, pp. 234–241.

[37] T. Wang and C. Liu, Human action recognition using supervised pLSA, International Journal of Signal Processing, Image Processing and Pattern Recognition 6(4) (2013), 403–414.

[38] D. Aliyanto, R. Sarno and B.S. Rintyarna, Supervised probabilistic latent semantic analysis (sPLSA) for estimating technology readiness level, in: 2017 11th International Conference on Information & Communication Technology and System (ICTS), IEEE, 2017, pp. 79–84.

[39] R. Salakhutdinov and G. Hinton, Deep Boltzmann machines, in: Artificial Intelligence and Statistics, PMLR, 2009, pp. 448–455.

[40] H. Larochelle and S. Lauly, A neural autoregressive topic model, Advances in Neural Information Processing Systems 25 (2012), 2708–2716.

[41] D.P. Kingma and M. Welling, Auto-encoding variational Bayes, arXiv preprint arXiv:1312.6114 (2013).

[42] Z. Cao, S. Li, Y. Liu, W. Li and H. Ji, A novel neural topic model and its supervised extension, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 29, 2015.

[43] C.E. Moody, Mixing Dirichlet topic models and word embeddings to make lda2vec, arXiv preprint arXiv:1605.02019 (2016).

[44] A.B. Dieng, C. Wang, J. Gao and J. Paisley, TopicRNN: A recurrent neural network with long-range semantic dependency, arXiv preprint arXiv:1611.01702 (2016).

[45] P. Gupta, Y. Chaudhary, F. Buettner and H. Schütze, textTOvec: Deep contextualized neural autoregressive models of language with distributed compositional prior, International Conference on Learning Representations (2019).

[46] R. Murakami and B. Chakraborty, Investigating the efficient use of word embedding with neural-topic models for interpretable topics from short texts, Sensors 22(3) (2022), 852.

[47] M. Grootendorst, BERTopic: Neural topic modeling with a class-based TF-IDF procedure, arXiv preprint arXiv:2203.05794 (2022).

[48] H. Zhao, D. Phung, V. Huynh, Y. Jin, L. Du and W. Buntine, Topic modelling meets deep neural networks: A survey, in: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021.

[49] A. Abdelrazek, Y. Eid, E. Gawish, W. Medhat and A. Hassan, Topic modeling algorithms and applications: A survey, Information Systems (2022), 102131.

[50] K.K. Ladha, A spatial model of legislative voting with perceptual error, Public Choice 68(1–3) (1991), 151–174.

[51] J. Londregan, Estimating legislators’ preferred points, Political Analysis 8(1) (1999), 35–56.

[52] G.W. Cox and K.T. Poole, On measuring partisanship in roll-call voting: The US House of Representatives, 1877–1999, American Journal of Political Science (2002), 477–489.

[53] J. Clinton, S. Jackman and D. Rivers, The statistical analysis of roll call data, American Political Science Review 98(2) (2004), 355–370.

[54] M. Thomas, B. Pang and L. Lee, Get out the vote: Determining support or opposition from Congressional floor-debate transcripts, in: Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2006, pp. 327–335.

[55] S. Gerrish and D.M. Blei, Predicting legislative roll calls from text, in: Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp. 489–496.

[56] S. Gerrish and D.M. Blei, How they vote: Issue-adjusted models of legislative behavior, in: Advances in Neural Information Processing Systems, 2012, pp. 2753–2761.

[57] Y. Fang, L. Si, N. Somasundaram and Z. Yu, Mining contrastive opinions on political texts using cross-perspective topic model, in: Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, ACM, 2012, pp. 63–72.

[58] Y. Gu, Y. Sun, N. Jiang, B. Wang and T. Chen, Topic-factorized ideal point estimation model for legislative voting network, in: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2014, pp. 183–192.

[59] C. Chen, F. Ibekwe-SanJuan, E. SanJuan and C. Weaver, Visual analysis of conflicting opinions, in: Visual Analytics Science and Technology, 2006 IEEE Symposium On, IEEE, 2006, pp. 59–66.

[60] M. Tsytsarau, T. Palpanas and K. Denecke, Scalable discovery of contradictions on the web, in: Proceedings of the 19th International Conference on World Wide Web, ACM, 2010, pp. 1195–1196.

[61] W.-H. Lin, T. Wilson, J. Wiebe and A. Hauptmann, Which side are you on? Identifying perspectives at the document and sentence levels, in: Proceedings of the Tenth Conference on Computational Natural Language Learning, Association for Computational Linguistics, 2006, pp. 109–116.

[62] S. Somasundaran and J. Wiebe, Recognizing stances in online debates, in: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1, Association for Computational Linguistics, 2009, pp. 226–234.

[63] J. Ashford, L. Turner, R. Whitaker, A. Preece, D. Felmlee and D. Towsley, Understanding the signature of controversial Wikipedia articles through motifs in editor revision networks, in: Companion Proceedings of the 2019 World Wide Web Conference, 2019, pp. 1180–1187.

[64] K. Kanclerz, A. Figas, M. Gruza, T. Kajdanowicz, J. Kocoń, D. Puchalska and P. Kazienko, Controversy and conformity: From generalized to personalized aggressiveness detection, in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021, pp. 5915–5926.

[65] D.A. Morris-O’Connor, A. Strotmann and D. Zhao, The colonization of Wikipedia: Evidence from characteristic editing behaviors of warring camps, Journal of Documentation (2022).

[66] S. Benslimane, J. Azé, S. Bringay, M. Servajean and C. Mollevi, Controversy detection: A text and graph neural network based approach, in: International Conference on Web Information Systems Engineering, Springer, 2021, pp. 339–354.

[67] E.E. Küçük, S. Takır and D. Küçük, Controversy detection on health-related tweets, in: Proceedings of the 14th International Symposium on Health Informatics and Bioinformatics, 2021, p. 60.

[68] K. Garimella, G.D.F. Morales, A. Gionis and M. Mathioudakis, Quantifying controversy on social media, ACM Transactions on Social Computing 1(1) (2018), 1–27.

[69] A.E. Hoerl and R.W. Kennard, Ridge regression: Biased estimation for nonorthogonal problems, Technometrics 12(1) (1970), 55–67.

[70] P. McCullagh and J.A. Nelder, Generalized Linear Models, Vol. 37, CRC Press, 1989.

[71] H. Zhao, D. Phung, V. Huynh, Y. Jin, L. Du and W. Buntine, Topic modelling meets deep neural networks: A survey, in: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021, pp. 4713–4720.