C04-1059 statistical machine translation decoder . It is the optimal hypothesis
D08-1010 Therefore , our model allows the decoder to perform context-dependent
C02-1050 translations performed by the decoders is presented in Figure 6 .
D08-1010 Therefore , during decoding , the decoder should select a correct target-side
C02-1050 should come first . Hence , the decoder can discriminate a hypothesis
C04-1030 describe one way to obtain such a decoder . It has been pointed out in
D08-1010 examples in Figure 2 . This makes it hard for the decoder to distinguish the two rules
C04-1103 ( 5 ) . 3.6 Decoding Issue The decoder searches for the most probable
C96-1082 usually applied to an acoustic decoder in isolation . We counted only
C04-1103 same n-gram TM and using the same decoder . 3.4 DOM : n-gram TM vs. NCM
C02-2003 supplies the acoustic models and the decoder . The right-hand side of the
C73-1014 State machines as encoders and decoders , the tests for IL and ILF can
D08-1010 Thus the MERS models can help the decoder to perform a context-dependent
C04-1168 translation system is a graph-based decoder ( Ueffing et al. , 2002 ) . The
C04-1030 have to modify the beam search decoder such that it can not produce
C04-1103 dictionary lookup processing with the decoder , which is referred to as Case 2
C04-1168 Watanabe and Sumita , 2003 ) . The decoder used the IBM Model 4 with a trigram
C96-1075 of the strengths of the speech decoder . At the current time , Phoenix
C04-1060 the IBM models , and in practice decoders search through the space of hypotheses
C04-1103 Viterbi algorithm , we use a stack decoder ( Schwartz et al. , 1990 ) to