P10-2056 binary ) feature functions . During ME training , the optimal weights λi corresponding
W06-2601 leaving-one-out method into the standard ME training algorithm . Experimental results
P10-2002 challenge might exist when running the ME training toolkit over a big size of training
W14-1710 maximum-likelihood parameter estimate in the ME training . Therefore a feature selection
P10-2002 3.2 Context-based features for ME training ME approach has the merit of
J13-2001 for log-linear combination or ME training without derivation . In contrast
P01-1027 the 348 models obtained with the ME training . For a hypothesis sentence and
W03-1020 parameters in the conditional ME training . Specifically , we use array
W04-3248 穆斯塔菲兹拉赫曼 " correctly . Since in ME training we use iterative bootstrapping
P10-2002 Petrov and Klein , 2007 ) . The ME training toolkit , developed by ( Zhang
P06-2028 of words may help , and use the ME training process to weed out the irrelevant
P02-1021 well as the cutoff used during ME training . It will also be necessary to
N06-1026 To express tree structure for ME training , we extract path information
N04-1034 each English sentence we keep as ME training instances its Arabic equivalent
W13-3610 Training data refinement . • ME training . The test step includes three
N04-1034 Depending on the domain of the ME training corpus and the size of the filter
S01-1032 The set of features defined for ME training is described below and it is
W06-2601 log-probability distributions . ME training with the so-obtained real-valued
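The snippets above all concern maximum-entropy (ME, log-linear) training: binary feature functions, optimal weights λi, and iterative parameter estimation. A minimal sketch of that procedure, using plain gradient ascent on the conditional log-likelihood (the toy data, function names, and learning-rate settings here are illustrative assumptions, not taken from any of the cited papers, which typically use GIS, IIS, or L-BFGS):

```python
import math

def train_me(data, num_feats, lr=0.5, iters=200):
    """Maximum-entropy training via gradient ascent on the
    conditional log-likelihood.

    data: list of (feats_by_label, gold), where feats_by_label[y]
    is the set of active binary-feature indices for label y."""
    w = [0.0] * num_feats  # feature weights (the lambda_i)
    for _ in range(iters):
        grad = [0.0] * num_feats
        for feats_by_label, gold in data:
            scores = [sum(w[f] for f in feats) for feats in feats_by_label]
            m = max(scores)  # subtract max for numerical stability
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            # gradient = observed feature counts ...
            for f in feats_by_label[gold]:
                grad[f] += 1.0
            # ... minus model-expected feature counts
            for feats, e in zip(feats_by_label, exps):
                p = e / z
                for f in feats:
                    grad[f] -= p
        for i in range(num_feats):
            w[i] += lr * grad[i] / len(data)
    return w

def predict(w, feats_by_label):
    """Return the label whose active features score highest."""
    scores = [sum(w[f] for f in feats) for feats in feats_by_label]
    return max(range(len(scores)), key=scores.__getitem__)
```

Refinements mentioned in the snippets, such as feature cutoffs, feature selection, or leaving-one-out smoothing, would sit on top of this basic loop by restricting or reweighting the feature set before estimation.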