This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA.
This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables.
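Concretely, each occurrence of a non-terminal A in a tree carries a hidden annotation x, written A[x], and the probability of an observable tree T marginalizes over all assignments X of annotations to its nodes. In a standard PCFG-LA parameterization (the notation below is ours, not necessarily the paper's: pi is the distribution over root annotations and beta the annotated rule probabilities):

    P(T) = \sum_{X} P(T, X)
         = \sum_{X} \pi\bigl(S[x_{\text{root}}]\bigr)
           \prod_{A[x] \to B[y]\,C[z] \,\in\, (T,X)} \beta\bigl(A[x] \to B[y]\,C[z]\bigr)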
Fine-grained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM algorithm.
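Because the tree shapes in a parsed corpus are observed and only the annotations are hidden, the E-step reduces to an inside-outside computation over each fixed tree. The following is a minimal sketch of such a training loop, assuming binarized trees, a toy two-sentence corpus, and K = 2 annotations per symbol; all names and data are illustrative, not the paper's implementation.

    import random
    from collections import defaultdict

    K = 2  # latent annotations per non-terminal (illustrative choice)

    # Toy "parsed corpus": binary trees (label, left, right) with
    # pre-terminal leaves (label, word).  Purely illustrative data.
    corpus = [
        ("S", ("NP", "dogs"), ("VP", "bark")),
        ("S", ("NP", "cats"), ("VP", "sleep")),
    ]

    def normalize(beta, lex, pi):
        # For each annotated symbol (A, x), all of its expansions
        # (binary rules plus lexical entries) must sum to one.
        tot = defaultdict(float)
        for (a, x, *_), v in beta.items():
            tot[(a, x)] += v
        for (a, x, _), v in lex.items():
            tot[(a, x)] += v
        for k in beta:
            beta[k] /= tot[(k[0], k[1])]
        for k in lex:
            lex[k] /= tot[(k[0], k[1])]
        s = sum(pi)
        for x in range(K):
            pi[x] /= s

    def init_params(corpus):
        # Random weights for every annotated version of every observed rule.
        beta, lex = defaultdict(float), defaultdict(float)
        def walk(node):
            if isinstance(node[1], str):          # pre-terminal leaf
                a, w = node
                for x in range(K):
                    lex[(a, x, w)] = random.random()
            else:                                 # binary internal node
                a, l, r = node
                for x in range(K):
                    for y in range(K):
                        for z in range(K):
                            beta[(a, x, l[0], y, r[0], z)] = random.random()
                walk(l)
                walk(r)
        for t in corpus:
            walk(t)
        pi = [random.random() for _ in range(K)]
        normalize(beta, lex, pi)
        return beta, lex, pi

    def inside(node, beta, lex):
        # in[x] = P(subtree below node | node carries annotation x)
        if isinstance(node[1], str):
            a, w = node
            return [lex[(a, x, w)] for x in range(K)]
        a, l, r = node
        il, ir = inside(l, beta, lex), inside(r, beta, lex)
        return [sum(beta[(a, x, l[0], y, r[0], z)] * il[y] * ir[z]
                    for y in range(K) for z in range(K))
                for x in range(K)]

    def e_step(node, out, beta, lex, cb, cl):
        # out[x] already carries the 1/P(tree) factor, so every
        # increment below is a posterior expected count.
        if isinstance(node[1], str):
            a, w = node
            for x in range(K):
                cl[(a, x, w)] += out[x] * lex[(a, x, w)]
            return
        a, l, r = node
        il, ir = inside(l, beta, lex), inside(r, beta, lex)
        out_l, out_r = [0.0] * K, [0.0] * K
        for x in range(K):
            for y in range(K):
                for z in range(K):
                    b = beta[(a, x, l[0], y, r[0], z)]
                    cb[(a, x, l[0], y, r[0], z)] += out[x] * b * il[y] * ir[z]
                    out_l[y] += out[x] * b * ir[z]
                    out_r[z] += out[x] * b * il[y]
        e_step(l, out_l, beta, lex, cb, cl)
        e_step(r, out_r, beta, lex, cb, cl)

    def em_iteration(corpus, beta, lex, pi):
        cb, cl = defaultdict(float), defaultdict(float)
        cpi = [0.0] * K
        for t in corpus:
            in_root = inside(t, beta, lex)
            z = sum(pi[x] * in_root[x] for x in range(K))
            out_root = [pi[x] / z for x in range(K)]
            e_step(t, out_root, beta, lex, cb, cl)
            for x in range(K):
                cpi[x] += pi[x] * in_root[x] / z
        normalize(cb, cl, cpi)   # M-step: relative-frequency re-estimation
        return cb, cl, cpi

    beta, lex, pi = init_params(corpus)
    for _ in range(20):
        beta, lex, pi = em_iteration(corpus, beta, lex, pi)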
Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared.
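Exact parsing is hard because the best observable tree must maximize a sum over all annotation assignments. One natural approximation in this setting is to instead find the best complete derivation, maximizing over both the tree and its annotations with an annotated CKY pass, and then strip the annotations; this trades the NP-hard summation for a different, tractable objective. Below is a sketch of that idea, reusing the beta/lex/pi parameter format from the EM example above (our own illustration, not necessarily one of the paper's specific approximation methods).

    from collections import defaultdict

    def viterbi_parse(words, beta, lex, pi, start="S"):
        # Best complete derivation (tree + annotations) via annotated CKY;
        # beta/lex/pi follow the parameter format of the EM sketch above.
        n = len(words)
        by_children = defaultdict(list)          # (B,y,C,z) -> [((A,x), p)]
        for (a, x, b, y, c, z), p in beta.items():
            by_children[(b, y, c, z)].append(((a, x), p))
        # chart[i][j]: (A, x) -> (score, backpointer)
        chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):            # width-1 spans: lexical
            for (a, x, ww), p in lex.items():
                if ww == w and p > chart[i][i + 1].get((a, x), (0.0,))[0]:
                    chart[i][i + 1][(a, x)] = (p, w)
        for width in range(2, n + 1):            # wider spans, bottom-up
            for i in range(n - width + 1):
                j = i + width
                cell = chart[i][j]
                for k in range(i + 1, j):
                    for ls, (pl, _) in chart[i][k].items():
                        for rs, (pr, _) in chart[k][j].items():
                            for st, pb in by_children[ls + rs]:
                                s = pb * pl * pr
                                if s > cell.get(st, (0.0,))[0]:
                                    cell[st] = (s, (k, ls, rs))
        # choose the best annotated root, weighting by pi, then strip
        best, best_state = 0.0, None
        for (a, x), (s, _) in chart[0][n].items():
            if a == start and pi[x] * s > best:
                best, best_state = pi[x] * s, (a, x)
        return strip(chart, 0, n, best_state) if best_state else None

    def strip(chart, i, j, state):
        # Rebuild the tree while discarding the latent annotations.
        _, bp = chart[i][j][state]
        if isinstance(bp, str):                  # pre-terminal: (label, word)
            return (state[0], bp)
        k, ls, rs = bp
        return (state[0], strip(chart, i, k, ls), strip(chart, k, j, rs))

With the toy grammar trained above, viterbi_parse(["dogs", "bark"], beta, lex, pi) returns the unannotated tree ("S", ("NP", "dogs"), ("VP", "bark")).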
In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F1, sentences ≤ 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.