N13-1002 | trained on a development set using | MER training | to maximize the BLEU score of |
N07-1063 | calculate P ( D ) are trained using | MER training | ( Och , 2003 ) on development |
W05-0836 | also report MAP scores using the | MER training | described above to determine |
D09-1008 | However , due to the limitation of | MER training | , only part of the feature space |
W05-0836 | each decision rule . The MAP - | MER training | was performed to evaluate the |
P09-1103 | tree sequence pairs . For the | MER training | , we modify Koehn 's version |
P07-1059 | Och , 2003 ) . Training data for | MER training | were taken from multiple manual |
P08-1115 | the interpolation weights using | MER training | ( Och , 2003 ) . Evaluation was |
P09-1020 | - n word alignments . For the | MER training | ( Och , 2003 ) , Koehn 's MER |
D10-1043 | m-to-n word alignments . For the | MER training | ( Och , 2003 ) , Koehn 's MER |
N07-1063 | either approach is effective for | MER training | . 6.2 Results Figure 3 and Figure |
W05-0836 | highest Score , just as in the | MER training | process . The exact method of |
P08-1064 | m-to-n word alignments . For the | MER training | ( Och , 2003 ) , we modified |
D09-1073 | Kenser and Ney , 1995 ) . For the | MER training | ( Och , 2003 ) , we modify Koehn |
D08-1060 | weights optimized in standard | MER training | . The combinatorial effects of |
N07-1063 | which have been trained using | MER training | . The parameters used for these |
N07-1063 | varied pruning parameters run | MER training | and still generate parameters |
W05-0836 | algorithm for MAP . Note that the | MER training | approach can not be performed |
D11-1044 | understand what our model learns during | MER training | , we computed the feature vectors |
P12-2062 | straightforward . At each iteration of | MER training | , we run the parser and decoder |