H94-1079 parameters M_new using 1 iteration of Viterbi training. 6. Update the model by smoothing
E09-1020 number of co-occurrences. A greedy Viterbi training is then applied to improve this
J03-1002 simplest method is to perform Viterbi training using only the best alignment
C96-1051 quantization and beam-search driven Viterbi training and recognition.
J04-1004 Su (1997) use an unsupervised Viterbi training process to select potential unknown
J12-3003 (2010c) for the hardness of Viterbi training and maximizing log-likelihood
D13-1204 single (unlexicalized) step of Viterbi training: The idea here is to focus on
J93-2003 Viterbi alignment, we call this Viterbi training. It is easy to see that Viterbi
J93-2003 lies in better modeling. 6.2 Viterbi Training: As we progress from Model 1 to
D13-1204 the "baby steps" strategy with Viterbi training (Spitkovsky et al., 2010,
H93-1020 training is similar to HMM "Viterbi training", in which training data is
J93-2003 then a similarly reinterpreted Viterbi training algorithm still converges. We
D11-1117 settings that are least favorable for Viterbi training: adhoc and sweet on. Although
D11-1117 average, compared to standard Viterbi training; A43 is, in fact, 20% faster
D11-1117 average, compared to standard Viterbi training; A13 is only 30% slower than
D11-1117 set-up may be disadvantageous for Viterbi training, since half the settings use
D13-1204 also as a single unlexicalized Viterbi training step, but now with proposed
H91-1052 Since these both were essentially Viterbi training procedures (estimated from only
D13-1204 (single steps of lexicalized Viterbi training on clean, simple data), ahead
D11-1117 average, compared to standard Viterbi training; A23 is again 30% slower than
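Taken together, the snippets describe one recipe: Viterbi training is hard EM. Instead of accumulating expected counts over all hidden structures (alignments, parses, state paths), the E-step commits to the single best Viterbi structure under the current model, and the M-step re-estimates parameters from those 1-best counts, usually with smoothing (as in the H94-1079 and J93-2003 snippets above). Below is a minimal sketch of this loop for a discrete HMM in Python/NumPy; every name in it (viterbi_train, alpha, and so on) is an illustrative assumption of this sketch, not an API from any of the cited papers.

import numpy as np

def viterbi_path(obs, log_pi, log_A, log_B):
    # Single best state sequence for one integer-coded observation sequence.
    T, K = len(obs), log_pi.shape[0]
    delta = np.empty((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # scores[i, j]: best via i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores[back[t], np.arange(K)] + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                  # follow backpointers
        path[t] = back[t + 1, path[t + 1]]
    return path

def viterbi_train(sequences, K, V, n_iters=10, alpha=0.1, seed=0):
    # Hard EM: decode 1-best paths, re-estimate from hard counts, smooth, repeat.
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.ones(K))                  # random initial model
    A = rng.dirichlet(np.ones(K), size=K)
    B = rng.dirichlet(np.ones(V), size=K)
    for _ in range(n_iters):
        c_pi = np.full(K, alpha)                    # add-alpha smoothed counts
        c_A = np.full((K, K), alpha)
        c_B = np.full((K, V), alpha)
        for obs in sequences:
            path = viterbi_path(obs, np.log(pi), np.log(A), np.log(B))
            c_pi[path[0]] += 1                      # count only the 1-best path
            for s, s2 in zip(path, path[1:]):
                c_A[s, s2] += 1
            for s, o in zip(path, obs):
                c_B[s, o] += 1
        pi = c_pi / c_pi.sum()                      # M-step: renormalize counts
        A = c_A / c_A.sum(axis=1, keepdims=True)
        B = c_B / c_B.sum(axis=1, keepdims=True)
    return pi, A, B

# Example: two toy sequences over a vocabulary of 3 symbols, with 2 states.
pi, A, B = viterbi_train([np.array([0, 1, 2, 1]), np.array([2, 2, 0])], K=2, V=3)

The add-alpha smoothing on the counts matters: a state that the 1-best paths never visit would otherwise receive zero probability and could never be selected again, which is one reason hard EM can stall in poor local optima (cf. the hardness result cited in the J12-3003 snippet).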