We describe a generative probabilistic model of natural language, which we call HBG, that takes advantage of detailed linguistic information to resolve ambiguity. HBG incorporates lexical, syntactic, semantic, and structural information from the parse tree into the disambiguation process in a novel way. We use a corpus of bracketed sentences, called a Treebank, in combination with decision tree building to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence. This stands in contrast to the usual approach of further grammar tailoring via linguistic introspection in the hope of generating the correct parse. In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG.
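To make the baseline concrete, here is a minimal sketch of the kind of plain probabilistic CFG the text calls P-CFG, under an entirely hypothetical toy grammar and probabilities (not from the paper): each rule carries a probability, a parse's probability is the product over the rules it uses, and disambiguation picks the highest-scoring parse — with no access to the wider lexical or structural context that HBG adds.

```python
import math

# Hypothetical toy grammar (illustrative probabilities only).
RULES = {
    ("S", ("NP", "VP")): 1.0,
    ("VP", ("V", "NP")): 0.5,
    ("VP", ("V", "NP", "PP")): 0.5,
    ("NP", ("Det", "N")): 0.4,
    ("NP", ("NP", "PP")): 0.2,
    ("NP", ("Pro",)): 0.4,
    ("PP", ("P", "NP")): 1.0,
    ("Pro", ("I",)): 1.0,
    ("V", ("saw",)): 1.0,
    ("Det", ("the",)): 1.0,
    ("N", ("man",)): 0.5,
    ("N", ("telescope",)): 0.5,
    ("P", ("with",)): 1.0,
}

def parse_log_prob(tree, rules):
    """Sum of log rule probabilities over a parse given as nested tuples."""
    label, children = tree[0], tree[1:]
    if len(children) == 1 and isinstance(children[0], str):
        return math.log(rules[(label, (children[0],))])  # lexical rule
    logp = math.log(rules[(label, tuple(c[0] for c in children))])
    return logp + sum(parse_log_prob(c, rules) for c in children)

# Two parses of "I saw the man with the telescope": PP attached to the
# verb phrase (instrumental reading) vs. to the object noun phrase.
vp_attach = ("S", ("NP", ("Pro", "I")),
             ("VP", ("V", "saw"),
              ("NP", ("Det", "the"), ("N", "man")),
              ("PP", ("P", "with"),
               ("NP", ("Det", "the"), ("N", "telescope")))))
np_attach = ("S", ("NP", ("Pro", "I")),
             ("VP", ("V", "saw"),
              ("NP", ("NP", ("Det", "the"), ("N", "man")),
               ("PP", ("P", "with"),
                ("NP", ("Det", "the"), ("N", "telescope"))))))

best = max([vp_attach, np_attach], key=lambda t: parse_log_prob(t, RULES))
```

With these made-up numbers the verb-attachment parse wins; the point is only that a P-CFG's choice depends solely on context-free rule frequencies, which is exactly the limitation the richer HBG model is designed to overcome.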