lr-prod,15-2-N03-1033,bq |
Using these ideas together , the resulting
<term>
tagger
</term>
gives a 97.24 %
<term>
accuracy
</term>
on the
<term>
Penn Treebank WSJ
</term>
, an
<term>
error reduction
</term>
of 4.4 % on the best previous single automatically learned
<term>
tagging
</term>
result .
|
#2994
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
measure(ment),12-2-N03-1033,bq |
Using these ideas together , the resulting
<term>
tagger
</term>
gives a 97.24 %
<term>
accuracy
</term>
on the
<term>
Penn Treebank WSJ
</term>
, an
<term>
error reduction
</term>
of 4.4 % on the best previous single automatically learned
<term>
tagging
</term>
result .
|
#2991
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
measure(ment),20-2-N03-1033,bq |
Using these ideas together , the resulting
<term>
tagger
</term>
gives a 97.24 %
<term>
accuracy
</term>
on the
<term>
Penn Treebank WSJ
</term>
, an
<term>
error reduction
</term>
of 4.4 % on the best previous single automatically learned
<term>
tagging
</term>
result .
|
#2999
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
model,55-1-N03-1033,bq |
We present a new
<term>
part-of-speech tagger
</term>
that demonstrates the following ideas : ( i ) explicit use of both preceding and following
<term>
tag contexts
</term>
via a
<term>
dependency network representation
</term>
, ( ii ) broad use of
<term>
lexical features
</term>
, including
<term>
jointly conditioning on multiple consecutive words
</term>
, ( iii ) effective use of
<term>
priors
</term>
in
<term>
conditional loglinear models
</term>
, and ( iv ) fine-grained modeling of
<term>
unknown word features
</term>
.
|
#2964
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,22-1-N03-1033,bq |
We present a new
<term>
part-of-speech tagger
</term>
that demonstrates the following ideas : ( i ) explicit use of both preceding and following
<term>
tag contexts
</term>
via a
<term>
dependency network representation
</term>
, ( ii ) broad use of
<term>
lexical features
</term>
, including
<term>
jointly conditioning on multiple consecutive words
</term>
, ( iii ) effective use of
<term>
priors
</term>
in
<term>
conditional loglinear models
</term>
, and ( iv ) fine-grained modeling of
<term>
unknown word features
</term>
.
|
#2931
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,26-1-N03-1033,bq |
We present a new
<term>
part-of-speech tagger
</term>
that demonstrates the following ideas : ( i ) explicit use of both preceding and following
<term>
tag contexts
</term>
via a
<term>
dependency network representation
</term>
, ( ii ) broad use of
<term>
lexical features
</term>
, including
<term>
jointly conditioning on multiple consecutive words
</term>
, ( iii ) effective use of
<term>
priors
</term>
in
<term>
conditional loglinear models
</term>
, and ( iv ) fine-grained modeling of
<term>
unknown word features
</term>
.
|
#2935
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,36-1-N03-1033,bq |
We present a new
<term>
part-of-speech tagger
</term>
that demonstrates the following ideas : ( i ) explicit use of both preceding and following
<term>
tag contexts
</term>
via a
<term>
dependency network representation
</term>
, ( ii ) broad use of
<term>
lexical features
</term>
, including
<term>
jointly conditioning on multiple consecutive words
</term>
, ( iii ) effective use of
<term>
priors
</term>
in
<term>
conditional loglinear models
</term>
, and ( iv ) fine-grained modeling of
<term>
unknown word features
</term>
.
|
#2945
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,53-1-N03-1033,bq |
We present a new
<term>
part-of-speech tagger
</term>
that demonstrates the following ideas : ( i ) explicit use of both preceding and following
<term>
tag contexts
</term>
via a
<term>
dependency network representation
</term>
, ( ii ) broad use of
<term>
lexical features
</term>
, including
<term>
jointly conditioning on multiple consecutive words
</term>
, ( iii ) effective use of
<term>
priors
</term>
in
<term>
conditional loglinear models
</term>
, and ( iv ) fine-grained modeling of
<term>
unknown word features
</term>
.
|
#2962
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,66-1-N03-1033,bq |
We present a new
<term>
part-of-speech tagger
</term>
that demonstrates the following ideas : ( i ) explicit use of both preceding and following
<term>
tag contexts
</term>
via a
<term>
dependency network representation
</term>
, ( ii ) broad use of
<term>
lexical features
</term>
, including
<term>
jointly conditioning on multiple consecutive words
</term>
, ( iii ) effective use of
<term>
priors
</term>
in
<term>
conditional loglinear models
</term>
, and ( iv ) fine-grained modeling of
<term>
unknown word features
</term>
.
|
#2975
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
tech,32-2-N03-1033,bq |
Using these ideas together , the resulting
<term>
tagger
</term>
gives a 97.24 %
<term>
accuracy
</term>
on the
<term>
Penn Treebank WSJ
</term>
, an
<term>
error reduction
</term>
of 4.4 % on the best previous single automatically learned
<term>
tagging
</term>
result .
|
#3011
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
tech,4-1-N03-1033,bq |
We present a new
<term>
part-of-speech tagger
</term>
that demonstrates the following ideas : ( i ) explicit use of both preceding and following
<term>
tag contexts
</term>
via a
<term>
dependency network representation
</term>
, ( ii ) broad use of
<term>
lexical features
</term>
, including
<term>
jointly conditioning on multiple consecutive words
</term>
, ( iii ) effective use of
<term>
priors
</term>
in
<term>
conditional loglinear models
</term>
, and ( iv ) fine-grained modeling of
<term>
unknown word features
</term>
.
|
#2913
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
tech,40-1-N03-1033,bq |
We present a new
<term>
part-of-speech tagger
</term>
that demonstrates the following ideas : ( i ) explicit use of both preceding and following
<term>
tag contexts
</term>
via a
<term>
dependency network representation
</term>
, ( ii ) broad use of
<term>
lexical features
</term>
, including
<term>
jointly conditioning on multiple consecutive words
</term>
, ( iii ) effective use of
<term>
priors
</term>
in
<term>
conditional loglinear models
</term>
, and ( iv ) fine-grained modeling of
<term>
unknown word features
</term>
.
|
#2949
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
tech,7-2-N03-1033,bq |
Using these ideas together , the resulting
<term>
tagger
</term>
gives a 97.24 %
<term>
accuracy
</term>
on the
<term>
Penn Treebank WSJ
</term>
, an
<term>
error reduction
</term>
of 4.4 % on the best previous single automatically learned
<term>
tagging
</term>
result .
|
#2986
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |