This paper defines a
<term>
generative probabilistic model
</term>
of
<term>
parse trees
</term>
, which we call
<term>
PCFG-LA
</term>
.
This
<term>
model
</term>
is an extension of
<term>
PCFG
</term>
in which
<term>
non-terminal symbols
</term>
are augmented with
<term>
latent variables
</term>
.
<term>
Fine-grained CFG rules
</term>
are automatically induced from a
<term>
parsed corpus
</term>
by training a
<term>
PCFG-LA model
</term>
using an
<term>
EM-algorithm
</term>
.
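As a minimal sketch of the latent-annotation idea (hypothetical code, not the paper's implementation; the symbol names and the number of annotations `H` are assumptions), each non-terminal can be split into `H` annotated variants, so a single CFG rule expands into a family of fine-grained rules whose probabilities would then be estimated by EM:

```python
from itertools import product

# Hypothetical sketch: each nonterminal X is split into annotated symbols
# X[0] .. X[H-1]; a binary rule A -> B C then expands into H**3 fine-grained
# rules. H = 2 is an arbitrary choice for illustration.
H = 2

def split_rule(rule, h=H):
    """Expand a CFG rule (lhs, rhs_tuple) into its latent-annotated variants."""
    lhs, rhs = rule
    variants = []
    for annos in product(range(h), repeat=1 + len(rhs)):
        ann_lhs = f"{lhs}[{annos[0]}]"
        ann_rhs = tuple(f"{sym}[{a}]" for sym, a in zip(rhs, annos[1:]))
        variants.append((ann_lhs, ann_rhs))
    return variants

fine = split_rule(("S", ("NP", "VP")))
# 2 annotations on each of S, NP, VP -> 2**3 = 8 fine-grained rules
```

EM training would assign probabilities to these fine-grained rules so that the marginal likelihood of the observed (unannotated) parse trees is maximized.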
Because
<term>
exact parsing
</term>
with a
<term>
PCFG-LA
</term>
is NP-hard , several approximations are described and empirically compared .
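The source of the hardness can be sketched with toy numbers (all probabilities below are made up for illustration): the probability of an unannotated tree marginalizes over every assignment of latent annotations, so exact parsing must compare such sums across candidate trees rather than single rule products.

```python
from itertools import product

# Hypothetical toy probabilities for the annotated versions of a one-rule
# tree S -> NP VP, indexed by the (s, np, vp) annotation triple. H = 2.
H = 2
p_annotated = {
    (0, 0, 0): 0.10, (0, 0, 1): 0.05, (0, 1, 0): 0.02, (0, 1, 1): 0.08,
    (1, 0, 0): 0.04, (1, 0, 1): 0.01, (1, 1, 0): 0.03, (1, 1, 1): 0.07,
}

# The unannotated tree's probability sums over all H**3 assignments;
# maximizing such sums over trees is what makes exact parsing NP-hard.
p_tree = sum(p_annotated[a] for a in product(range(H), repeat=3))
```

Approximations therefore replace this exact marginal objective with something tractable, such as scoring trees by their single best annotation.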
In experiments using the
<term>
Penn WSJ corpus
</term>
, our
<term>
automatically trained model
</term>
gave a performance of 86.6 % ( F1 , sentences ≤ 40 words ) , which is comparable to that of an
<term>
unlexicalized PCFG parser
</term>
created using extensive
<term>
manual feature selection
</term>
.