D09-1026 appropriately normalizing the topic assignments z . It should now be apparent
D10-1005 probability distribution Bd over topic assignments . Each Bd is a vector of length
A92-1025 complete list of industry and topic assignments currently in use to categorize
D09-1146 each collection by considering topic assignments of documents within each collection
D08-1035 , and write zt to indicate the topic assignment for sentence t . The observation
A88-1002 consistent in the way they made their topic assignments . The news stories themselves
D09-1163 words and . denotes the vector of topic assignment except the considered word at
D10-1005 least squares regression using the topic assignments zd to predict yd . Prediction
D10-1025 constraints on the expectations of topic assignments to two corresponding documents
D09-1092 for each language l , a latent topic assignment is drawn for each token in that
A92-1025 most obvious differences between topic assignment and industry assignment are :
D10-1025 term , which tries to make the topic assignments of corresponding documents close
D10-1005 response variable depends on the topic assignments of a document , the conditional
A92-1025 industry categories . Performance on topic assignment was generally much higher using
A92-1025 system used linguistic methods for topic assignment and statistical methods for industries
D09-1092 expect that simple analysis of topic assignments for sequential words would yield
D09-1092 training tuples and inferring latent topic assignments for test documents . These tasks
D10-1025 document is close to the best topic assignment of the foreign document . This
D10-1025 instead of making sure that the best topic assignment for the English document is close
A92-1025 with coverage of over 90 % on topic assignment and performance better than human