W06-3108 | defines an event for the maximum | entropy training | . An exception are the one-to-many
W06-1646 | prohibitively expensive for maximum | entropy training | . Analysis of the models learned
W11-3214 | Pairs The features for maximum | entropy training | are extracted from aligned names
P08-1011 | Goodman , 1998 ) . The Maximum | Entropy training | toolkit from ( Zhang , 2006 )
W06-1646 | feature selection and the maximum | entropy training | procedure . 8 Acknowledgements
E03-1007 | baseline system and from the maximum | entropy training | on the transformed corpus . For
W11-1007 | weights obtained through the maximum | entropy training | on the parallel data . Finally
W10-4133 | some features for the maximum | entropy training | . However , it effectively improves
N04-1039 | 2001 . Classes for fast maximum | entropy training | . In ICASSP 2001 . Joshua Goodman
W11-2708 | ) ( 4 ) Given that the maximum | entropy training | procedure attempts to minimize
P02-1038 | five-gram GIS algorithm for maximum | entropy training | of alignment templates . language
W06-3108 | formance . Here , we let the maximum | entropy training | decide which features are important
W07-1516 | Our implementation of maximum | entropy training | employs a convex optimization
D12-1095 | subtree ranker method using Maximum | Entropy training | ( subtree ranking by Max -
W09-2902 | Again , supervised denotes Maximum | Entropy training | and Unsupervised is our unsupervised
W10-1410 | Chrupała et al. ( 2008 ) use Maximum | Entropy training | to learn PM and PL , here we
E09-1033 | more difficult . Their Maximum | Entropy training | is more appropriate for their
P07-1091 | alignment tool , and a Maximum | Entropy training | tool . We use the Stanford parser
P02-1038 | translations used for the maximum | entropy training | . • WER ( word error rate
P02-1038 | problem , we define for maximum | entropy training | each sentence as reference translation