other,7-4-P03-1058,bq |
On a subset of the most difficult
<term>
|
SENSEVAL-2 nouns
|
</term>
, the
<term>
accuracy
</term>
difference
|
#4876
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
lr,15-2-P03-1058,bq |
sense-tagged training data
</term>
from
<term>
|
English-Chinese parallel corpora
|
</term>
, which are then used for disambiguating
|
#4836
In this paper, we evaluate an approach to automatically acquire sense-tagged training data from English-Chinese parallel corpora, which are then used for disambiguating the nouns in the SENSEVAL-2 English lexical sample task. |
tech,19-1-P03-1058,bq |
sense-tagged data
</term>
required for
<term>
|
supervised learning
|
</term>
. In this paper , we evaluate an
|
#4818
A central problem of word sense disambiguation (WSD) is the lack of manually sense-tagged data required for supervised learning. |
other,43-4-P03-1058,bq |
sense-tagged data
</term>
have in their
<term>
|
sense coverage
|
</term>
. Our analysis also highlights the
|
#4912
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
other,29-2-P03-1058,bq |
disambiguating the
<term>
nouns
</term>
in the
<term>
|
SENSEVAL-2 English lexical sample task
|
</term>
. Our investigation reveals that
|
#4850
In this paper, we evaluate an approach to automatically acquire sense-tagged training data from English-Chinese parallel corpora, which are then used for disambiguating the nouns in the SENSEVAL-2 English lexical sample task. |
tech,14-5-P03-1058,bq |
domain dependence
</term>
in evaluating
<term>
|
WSD programs
|
</term>
. We describe the ongoing construction
|
#4929
Our analysis also highlights the importance of the issue of domain dependence in evaluating WSD programs. |
measure(ment),11-4-P03-1058,bq |
<term>
SENSEVAL-2 nouns
</term>
, the
<term>
|
accuracy
|
</term>
difference between the two approaches
|
#4880
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
lr,11-2-P03-1058,bq |
approach to automatically acquire
<term>
|
sense-tagged training data
|
</term>
from
<term>
English-Chinese parallel
|
#4832
In this paper, we evaluate an approach to automatically acquire sense-tagged training data from English-Chinese parallel corpora, which are then used for disambiguating the nouns in the SENSEVAL-2 English lexical sample task. |
other,10-5-P03-1058,bq |
highlights the importance of the issue of
<term>
|
domain dependence
|
</term>
in evaluating
<term>
WSD programs
</term>
|
#4925
Our analysis also highlights the importance of the issue of domain dependence in evaluating WSD programs. |
other,26-2-P03-1058,bq |
are then used for disambiguating the
<term>
|
nouns
|
</term>
in the
<term>
SENSEVAL-2 English lexical
|
#4847
In this paper, we evaluate an approach to automatically acquire sense-tagged training data from English-Chinese parallel corpora, which are then used for disambiguating the nouns in the SENSEVAL-2 English lexical sample task. |
tech,5-3-P03-1058,bq |
Our investigation reveals that this
<term>
|
method of acquiring sense-tagged data
|
</term>
is promising . On a subset of the
|
#4861
Our investigation reveals that this method of acquiring sense-tagged data is promising. |
tech,4-1-P03-1058,bq |
of interest . A central problem of
<term>
|
word sense disambiguation ( WSD )
|
</term>
is the lack of
<term>
manually sense-tagged
|
#4803
A central problem of word sense disambiguation ( WSD ) is the lack of manually sense-tagged data required for supervised learning. |