P13-2073 MERT (Och, 2003) for decoding weight optimization. For Persian-English translation
P06-1090 rate training was not used for weight optimization. The thresholds used in the
N06-1004 challenge, particularly during the weight optimization step. In experiments on other
W05-0821 trained using the minimum error weight optimization procedure provided by Pharaoh
P06-1001 preprocessing was varied. Decoding weight optimization was done using a set of 200 sentences
N06-2013 schemes and techniques. Decoding weight optimization was done on 200 sentences from
P14-1129 rule extraction. For MT feature weight optimization, we use iterative k-best optimization
P07-1024 sentences provided as input to the weight optimization procedure. While the average
D12-1088 applied after the translation model weight optimization with MERT. We generate multiple
W09-0432 required to perform a reliable weight optimization. Our models were trained on
H93-1018 also use it for parameter and weight optimization. The N-best Paradigm is a type
P12-1001 optimization. 1 Introduction Weight optimization is an important step in building
W10-1748 longer expected decoding output. 3 Weight Optimization Standard search algorithms may
A94-1007 rules improve the model. • Weight optimization: The weights for each feature
W10-1748 confusion networks. Decoding weight optimization using direct lattice 1-best BLEU
W05-0822 with our system in the areas of weight optimization and number and date rules. It
W09-3519 parallel corpora. MERT is used for weight optimization. It includes several improvements
W05-0822 could have been achieved by better weight optimization, this result clearly underscores
W08-0510 phrase tables, scripts for weight optimization using MERT (Och 2003), and
W10-0715 especially given to extend the weight optimization procedure in order to preserve