model,3-1-H92-1026,ak OCR accuracy </term> . We describe a <term> generative probabilistic model </term> of <term> natural language </term> ,
other,7-1-H92-1026,ak generative probabilistic model </term> of <term> natural language </term> , which we call <term> HBG </term> ,
model,13-1-H92-1026,ak natural language </term> , which we call <term> HBG </term> , that takes advantage of detailed
other,20-1-H92-1026,ak , that takes advantage of detailed <term> linguistic information </term> to resolve <term> ambiguity </term> .
other,24-1-H92-1026,ak linguistic information </term> to resolve <term> ambiguity </term> . <term> HBG </term> incorporates <term>
model,0-2-H92-1026,ak </term> to resolve <term> ambiguity </term> . <term> HBG </term> incorporates <term> lexical , syntactic
other,2-2-H92-1026,ak </term> . <term> HBG </term> incorporates <term> lexical , syntactic , semantic , and structural information </term> from the <term> parse tree </term> into
other,13-2-H92-1026,ak structural information </term> from the <term> parse tree </term> into the <term> disambiguation process
other,17-2-H92-1026,ak the <term> parse tree </term> into the <term> disambiguation process </term> in a novel way . We use a <term> corpus
lr,3-3-H92-1026,ak process </term> in a novel way . We use a <term> corpus </term> of bracketed <term> sentences </term>
other,6-3-H92-1026,ak use a <term> corpus </term> of bracketed <term> sentences </term> , called a <term> Treebank </term> ,
lr,10-3-H92-1026,ak bracketed <term> sentences </term> , called a <term> Treebank </term> , in combination with <term> decision
tech,15-3-H92-1026,ak Treebank </term> , in combination with <term> decision tree building </term> to tease out the relevant aspects
other,26-3-H92-1026,ak tease out the relevant aspects of a <term> parse tree </term> that will determine the correct <term>
other,33-3-H92-1026,ak </term> that will determine the correct <term> parse </term> of a <term> sentence </term> . This stands
other,36-3-H92-1026,ak the correct <term> parse </term> of a <term> sentence </term> . This stands in contrast to the
other,15-4-H92-1026,ak further grammar tailoring via the usual <term> linguistic introspection </term> in the hope of generating the correct
other,24-4-H92-1026,ak the hope of generating the correct <term> parse </term> . In head-to-head tests against one
model,9-5-H92-1026,ak tests against one of the best existing <term> robust probabilistic parsing models </term> , which we call <term> P-CFG </term>
model,17-5-H92-1026,ak parsing models </term> , which we call <term> P-CFG </term> , the <term> HBG model </term> significantly
hide detail