P08-2020 especially longer sentences into translation model training. We also extended our decoder
P08-2020 test data. 1.3 Training Data In translation model training we used the Chinese-English
N09-1049 upper levels of the CYK grid. For translation model training, we use all available data for
N09-1049 For translation model training, we use all allowed parallel
D15-1163 of PPDB for statistical machine translation model training. Our framework has three stages
D12-1078 bilingual datasets as used in translation model training. The MT performances on IWSLT
D09-1107 used for biphrase extraction and translation model training. Decoder feature weights were
H01-1035 tended to focus on their use in translation model training for MT rather than on monolingual
A00-1004 of our parallel text mining and translation model training. 3.1 The Corpus Using the above
N07-1008 the parallel corpus used in the translation model training. The baseline decoder is a phrase-based
P05-1048 using the bilexicon learned during translation model training. For each target word, we consider
A00-1004 will discuss some problems in translation model training and show the preliminary CUR
E14-1061 Splitting compounds prior to translation model training enables better access to the
A00-1004 used for parallel text mining, translation model training, and some results we obtained
A00-1004 evaluation. 3 Generated Corpus and Translation Model Training In this section, we describe
D12-1078 system applies pruning during translation model training and decoding, and a lot of translation
D08-1090 Arabic-English text. Limiting the translation model training in this way simulates the problem
E14-4034 English task. Performing consistent translation model training improves the translation d1:
A00-1004 algorithm we adopted, some issues in translation model training using the generated parallel
P00-1067 for further improvement. 1.3 Translation Model Training Chinese sentences must be segmented
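The P00-1067 snippet above notes that Chinese sentences must be segmented before translation model training, since word-alignment tools operate on space-delimited tokens. A minimal sketch of that preprocessing step, assuming the jieba segmenter and hypothetical file names (neither is named in the source snippets):

import jieba  # assumption: jieba segmenter; the papers do not specify a tool

def segment_corpus(zh_in_path, zh_out_path):
    """Segment raw Chinese sentences into space-delimited tokens,
    the input format expected by alignment tools such as GIZA++."""
    with open(zh_in_path, encoding="utf-8") as fin, \
         open(zh_out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            tokens = jieba.cut(line.strip())  # generator of word tokens
            fout.write(" ".join(tokens) + "\n")

if __name__ == "__main__":
    # hypothetical file names for illustration
    segment_corpus("train.zh", "train.seg.zh")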