We describe a <term>generative probabilistic model</term> of <term>natural language</term>, which we call <term>HBG</term>, that takes advantage of detailed <term>linguistic information</term> to resolve <term>ambiguity</term>. <term>HBG</term> incorporates <term>lexical, syntactic, semantic, and structural information</term> from the <term>parse tree</term> into the <term>disambiguation process</term> in a novel way.
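To make the conditioning concrete, here is a minimal sketch of the history-based idea: each rule expansion is scored given a richer "history" than the bare nonterminal a plain PCFG would use. The feature names (`parent`, `head=...`, `sem=...`) and the relative-frequency estimator below are illustrative assumptions, not the actual HBG parameterization.

```python
from collections import defaultdict

# counts[(history, rule)] and totals[history] would be estimated from a
# Treebank; the values below are toy data (an assumption for illustration).
counts = defaultdict(float)
totals = defaultdict(float)

def observe(history, rule):
    counts[(history, rule)] += 1.0
    totals[history] += 1.0

def rule_prob(history, rule):
    # Relative-frequency estimate of P(rule | history); a real system
    # would smooth this to handle unseen histories.
    return counts[(history, rule)] / totals[history] if totals[history] else 0.0

def derivation_prob(steps):
    # P(parse) = product over derivation steps of P(rule | history).
    p = 1.0
    for history, rule in steps:
        p *= rule_prob(history, rule)
    return p

# Toy training events: the same expansion is likely under one lexical
# head and unlikely under another.
observe(("VP", "head=eat", "sem=food"), "VP -> V NP")
observe(("VP", "head=eat", "sem=food"), "VP -> V NP")
observe(("VP", "head=sleep", "sem=none"), "VP -> V")

print(derivation_prob([(("VP", "head=eat", "sem=food"), "VP -> V NP")]))  # 1.0
```

The toy data makes the contrast explicit: a context-free model conditioned only on the nonterminal `VP` cannot distinguish the two heads, while the history-based score can.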
We use a <term>corpus</term> of bracketed <term>sentences</term>, called a <term>Treebank</term>, in combination with <term>decision tree building</term> to tease out the relevant aspects of a <term>parse tree</term> that will determine the correct <term>parse</term> of a <term>sentence</term>.
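The role of decision tree building can be illustrated with a toy greedy split chooser: among candidate questions about the parse history, pick the one that most reduces the entropy of the rule distribution. The events and feature names below are hypothetical; the paper's actual question inventory and tree-growing procedure are richer.

```python
import math
from collections import Counter, defaultdict

# Toy Treebank-derived events: (history features, observed rule).
events = [
    ({"parent": "VP", "head": "eat"},   "VP -> V NP"),
    ({"parent": "VP", "head": "eat"},   "VP -> V NP"),
    ({"parent": "VP", "head": "sleep"}, "VP -> V"),
    ({"parent": "NP", "head": "dog"},   "NP -> Det N"),
]

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def split_entropy(events, feature):
    # Weighted entropy of the rule labels after splitting on one feature.
    groups = defaultdict(list)
    for feats, rule in events:
        groups[feats[feature]].append(rule)
    n = len(events)
    return sum(len(g) / n * entropy(g) for g in groups.values())

def best_feature(events, features):
    # The most "relevant aspect" of the tree is the question whose split
    # leaves the least uncertainty about the correct rule.
    return min(features, key=lambda f: split_entropy(events, f))

print(best_feature(events, ["parent", "head"]))  # -> "head"
```

On this toy data the lexical head is the more informative question, which is exactly the kind of distinction the Treebank-driven procedure is meant to surface automatically.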
This stands in contrast to the usual approach of further grammar tailoring via <term>linguistic introspection</term> in the hope of generating the correct <term>parse</term>.
In head-to-head tests against one of the best existing <term>robust probabilistic parsing models</term>, which we call <term>P-CFG</term>, the <term>HBG model</term> significantly outperforms <term>P-CFG</term>, increasing the <term>parsing accuracy rate</term> from 60% to 75%, a 37% reduction in error.
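As a sanity check of our own (the abstract reports only the final figure): accuracy rising from 60% to 75% means the error rate falls from 40% to 25%, and (40 − 25) / 40 = 37.5%, consistent with the reported 37% relative reduction.

```python
# Our arithmetic for the relative error reduction, not code from the paper.
base_err, new_err = 1 - 0.60, 1 - 0.75
print(f"{(base_err - new_err) / base_err:.1%}")  # -> 37.5%
```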