We describe a new method for the representation of NLP structures within reranking approaches. We make use of a <term>conditional log-linear model</term>, with <term>hidden variables</term> representing the assignment of <term>lexical items</term> to <term>word clusters</term> or <term>word senses</term>. The <term>model</term> learns to automatically make these assignments based on a <term>discriminative training criterion</term>. <term>Training</term> and <term>decoding</term> with the <term>model</term> requires summing over an exponential number of hidden-variable assignments: the required summations can be computed efficiently and exactly using <term>dynamic programming</term>. As a case study, we apply the <term>model</term> to <term>parse reranking</term>. The <term>model</term> gives an <term>F-measure improvement</term> of ≈1.25% beyond the <term>base parser</term>, and an ≈0.25% improvement beyond the <term>Collins (2000) reranker</term>. Although our experiments are focused on <term>parsing</term>, the techniques described generalize naturally to NLP structures other than <term>parse trees</term>. This paper presents a <term>phrase-based
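The abstract's key computational point — that the sum over an exponential number of hidden-variable assignments can be computed exactly — can be illustrated with a minimal sketch. This is not the paper's implementation (the actual model sums over cluster assignments attached to the nodes of a parse tree); it assumes a simplified setting in which each word's features depend only on its own cluster assignment, so the joint sum factorizes into one log-sum-exp per word. All names here (`log_linear_score`, `feature_fn`, the toy weights) are hypothetical.

```python
import math

def log_linear_score(features, weights):
    """Dot product of sparse feature counts with a weight vector."""
    return sum(weights.get(f, 0.0) * v for f, v in features.items())

def log_sum_over_assignments(sentence, clusters, weights, feature_fn):
    """Log of the sum, over all |clusters|**len(sentence) joint cluster
    assignments, of exp(total log-linear score).

    Because each word's features (in this simplified sketch) depend only
    on that word's own cluster, the exponential sum factorizes into a
    per-word log-sum-exp, computed in O(len(sentence) * |clusters|).
    """
    total = 0.0
    for word in sentence:
        scores = [log_linear_score(feature_fn(word, c), weights)
                  for c in clusters]
        m = max(scores)  # shift for numerical stability
        total += m + math.log(sum(math.exp(s - m) for s in scores))
    return total
```

A brute-force enumeration of all `|clusters|**n` joint assignments yields the same value; the factorized computation is what makes training and decoding with such hidden-variable models tractable.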