tech,39-12-J05-1003,ak , <term> speech recognition </term> , <term> machine translation </term> , or <term> natural language generation
other,37-4-J05-1003,ak overlap and without the need to define a <term> derivation </term> or a <term> generative model </term>
other,23-9-J05-1003,ak the <term> feature space </term> in the <term> parsing data </term> . Experiments show significant efficiency
tech,43-12-J05-1003,ak <term> machine translation </term> , or <term> natural language generation </term> . We present a novel method for discovering
other,45-4-J05-1003,ak generative model </term> which takes these <term> features </term> into account . We introduce a new
other,25-7-J05-1003,ak additional 500,000 <term> features </term> over <term> parse trees </term> that were not included in the original
model,2-8-J05-1003,ak original <term> model </term> . The new <term> model </term> achieved 89.75 % <term> F-measure </term>
model,2-3-J05-1003,ak these <term> parses </term> . A second <term> model </term> then attempts to improve upon this
other,14-3-J05-1003,ak <term> ranking </term> , using additional <term> features </term> of the <term> tree </term> as evidence
tech,8-10-J05-1003,ak significant efficiency gains for the new <term> algorithm </term> over the obvious implementation of
model,18-8-J05-1003,ak <term> F-measure error </term> over the <term> baseline model ’s </term> score of 88.2 % . The article also
tech,23-12-J05-1003,ak should be applicable to many other <term> NLP problems </term> which are naturally framed as <term>
tech,13-5-J05-1003,ak reranking task </term> , based on the <term> boosting approach to ranking problems </term> described in Freund et al. ( 1998
other,30-12-J05-1003,ak </term> which are naturally framed as <term> ranking tasks </term> , for example , <term> speech recognition
measure(ment),6-8-J05-1003,ak <term> model </term> achieved 89.75 % <term> F-measure </term> , a 13 % relative decrease in <term>
model,40-4-J05-1003,ak define a <term> derivation </term> or a <term> generative model </term> which takes these <term> features </term>
other,16-9-J05-1003,ak </term> which takes advantage of the <term> sparsity </term> of the <term> feature space </term> in
other,17-3-J05-1003,ak additional <term> features </term> of the <term> tree </term> as evidence . The strength of our
tech,1-2-J05-1003,ak <term> probabilistic parser </term> . The <term> base parser </term> produces a set of <term> candidate
tech,9-9-J05-1003,ak a new <term> algorithm </term> for the <term> boosting approach </term> which takes advantage of the <term>
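The snippets above all come from an abstract describing discriminative reranking of a base parser's candidate parses with a boosting approach over a sparse feature space (Freund et al. 1998). The sketch below is only an illustration of that general setup, not the paper's algorithm: candidates carry sparse indicator features, a weight vector scores them, and a simplified, perceptron-style update stands in for the actual boosting round. All function names, feature strings, and the toy data are hypothetical.

```python
# Minimal sketch of feature-based parse reranking (assumed names and toy data).
from collections import defaultdict

def score(weights, features):
    """Score a candidate parse as a sparse dot product over indicator features."""
    return sum(weights[f] for f in features if f in weights)

def rerank(weights, candidates):
    """Pick the highest-scoring candidate from the base parser's n-best list."""
    return max(candidates, key=lambda c: score(weights, c["features"]))

def training_round(weights, examples, step=0.5):
    """One simplified update round (perceptron-style stand-in for boosting):
    compare the oracle candidate with the currently top-ranked one and adjust
    only the features that distinguish them; because each candidate activates
    few features, sparsity keeps this cheap."""
    updates = defaultdict(float)
    for ex in examples:
        best = max(ex["candidates"], key=lambda c: c["f1"])  # oracle parse
        top = rerank(weights, ex["candidates"])               # current choice
        if top is best:
            continue
        for f in best["features"]:
            updates[f] += step
        for f in top["features"]:
            updates[f] -= step
    for f, delta in updates.items():
        weights[f] += delta
    return weights

# Toy usage: two candidate parses for one sentence, distinguished by features.
examples = [{
    "candidates": [
        {"features": {"rule=S->VP", "head=saw"}, "f1": 0.80},
        {"features": {"rule=S->NP VP", "head=saw"}, "f1": 0.95},
    ],
}]
weights = defaultdict(float)
for _ in range(3):
    weights = training_round(weights, examples)
print(rerank(weights, examples[0]["candidates"])["f1"])  # -> 0.95
```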