D08-1037 new weight vector obtained by discriminative training. 4. Inferring Informative Representation
C04-1041 reducing the space required for discriminative training. It also dramatically increases
D08-1037 The weight vector learned by the discriminative training is denoted WD. Given the new
D10-1025 text categorization. We use discriminative training to create a projection of documents
D08-1047 instances that are effective for discriminative training. To address the first issue
D10-1025 the projections by the use of discriminative training: we minimize the difference
D10-1025 from cleanly parallel data, but discriminative training can bring additional error reduction
D08-1047 these challenges by exploring the discriminative training of candidate generators.
D09-1074 this problem and will make the discriminative training more targeted. Acknowledgments
D08-1065 possibly employed when designing discriminative training approaches for machine translation
D08-1037 this highlights the importance of discriminative training. Both model 4 and model 5 use
D09-1111 comparison of two feature paradigms for discriminative training: indicators and hybrid generative
D08-1023 we demonstrate the power of the discriminative training paradigm by extracting structured
D10-1014 be interesting to investigate discriminative training approaches to learn latent categories
D09-1111 as SK07. Despite its lack of discriminative training, word-level unigrams allow the
D09-1043 related to the present work is discriminative training in syntax-based MT (Turian et
D09-1043 the model, which prior work on discriminative training with log linear models for HPSG
D08-1037 discriminatively, as described next. 2.3 Discriminative Training Using the IFW operator, we generate
D09-1105 paper we will stick to purely discriminative training and decoding methods that simply
D09-1111 insights into the advantages of discriminative training. 2.3 Discriminative character