|
selection
</term>
. Furthermore , we propose the
|
use
|
of standard
<term>
parser evaluation methods
|
#2843
Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems. |
|
in the practical
<term>
translation
</term>
|
use
|
. The use of
<term>
NLP techniques
</term>
|
#19881
This model was practically implemented and incorporated into the English-Japanese MT system, and provided about 75% accuracy in the practical translation use. |
|
</term>
. Unlike conventional methods that
|
use
|
<term>
hand-crafted rules
</term>
, the proposed
|
#4238
Unlike conventional methods that use hand-crafted rules, the proposed method enables easy design of the discourse understanding process. |
|
would not be numerous enough to be of any
|
use
|
. We report experiments conducted on a
<term>
|
#5907
But computational linguists seem to be quite dubious about analogies between sentences: they would not be numerous enough to be of any use. |
|
relations between
<term>
name entities
</term>
by
|
use
|
of various
<term>
lexical and syntactic features
|
#11331
This paper presents an unsupervised learning approach to disambiguate various relations between name entities by use of various lexical and syntactic features from the contexts. |
|
Unification-based grammar formalisms
</term>
|
use
|
structures containing sets of
<term>
features
|
#14626
Unification-based grammar formalisms use structures containing sets of features to describe linguistic objects. |
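The entry above (#14626) mentions feature structures as used by unification-based grammar formalisms. A minimal sketch of feature-structure unification, assuming structures are represented as nested dicts (a hypothetical simplification; real formalisms such as PATR-II or HPSG use typed DAGs with reentrancy):

```python
def unify(fs1, fs2):
    """Unify two feature structures; return the merged structure or None on clash."""
    if not isinstance(fs1, dict) or not isinstance(fs2, dict):
        return fs1 if fs1 == fs2 else None  # atomic values must match exactly
    result = dict(fs1)
    for feat, val in fs2.items():
        if feat in result:
            merged = unify(result[feat], val)
            if merged is None:
                return None  # feature clash: unification fails
            result[feat] = merged
        else:
            result[feat] = val  # absent features are simply added
    return result

# Illustrative structures (assumed, not from the cited paper):
np_agr = {"num": "sg", "per": "3"}
vp_agr = {"num": "sg"}
print(unify(np_agr, vp_agr))  # {'num': 'sg', 'per': '3'}
print(unify({"num": "sg"}, {"num": "pl"}))  # None (clash)
```

Unification succeeds when compatible features merge and fails on any clashing atomic value, which is the core operation these formalisms build on.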
|
previous
<term>
models
</term>
, which either
|
use
|
arbitrary
<term>
windows
</term>
to compute
|
#6358
In comparison with previous models, which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models, in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus. |
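The entry above (#6358) contrasts models that use arbitrary windows to compute word similarity with distance-free co-occurrence models. A minimal sketch of the window-based counting being contrasted, with an illustrative window size and toy corpus (assumptions, not the cited paper's setup):

```python
from collections import Counter

def cooccurrences(tokens, window=2):
    """Count unordered word pairs that co-occur within a fixed-size window."""
    counts = Counter()
    for i, w in enumerate(tokens):
        # Only look ahead, so each pair within the window is counted once.
        for other in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((w, other)))] += 1
    return counts

corpus = "the cat sat on the mat".split()
print(cooccurrences(corpus, window=2)[("cat", "sat")])  # 1
```

The window parameter is exactly the "arbitrary" choice the entry refers to: pairs farther apart than the window contribute nothing, which is what distance-free co-occurrence models avoid.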
|
necessary and sufficient conditions for the
|
use
|
of
<term>
demonstrative expressions
</term>
|
#15178
This paper presents necessary and sufficient conditions for the use of demonstrative expressions in English and discusses implications for current discourse processing algorithms. |
|
respects : as a device to represent and to
|
use
|
different
<term>
dialog schemata
</term>
proposed
|
#12378
as a device to represent and to use different dialog schemata proposed in empirical conversation analysis; |
|
disambiguation algorithms
</term>
that did not make
|
use
|
of the
<term>
discourse constraint
</term>
|
#19327
In addition, it could also be used to help evaluate disambiguation algorithms that did not make use of the discourse constraint. |
|
</term>
of a
<term>
proposition
</term>
through the
|
use
|
of
<term>
test-score semantics
</term>
( Zadeh
|
#13615
Explicitation sets the stage for representing the meaning of a proposition through the use of test-score semantics (Zadeh, 1978, 1982). |
|
<term>
shared lexical information
</term>
for
|
use
|
by
<term>
Natural Language Processing ( NLP
|
#15982
This paper presents our experience in planning and building COMPLEX, a computational lexicon designed to be a repository of shared lexical information for use by Natural Language Processing (NLP) systems. |
|
summarisation
</term>
. In this paper , we
|
use
|
the
<term>
information redundancy
</term>
in
|
#7130
In this paper, we use the information redundancy in multilingual input to correct errors in machine translation and thus improve the quality of multilingual summaries. |
|
natural language
</term>
, current systems
|
use
|
manual or semi-automatic methods to collect
|
#1768
While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases. |
|
the
<term>
lexicon
</term>
. Together with the
|
use
|
of a larger
<term>
training set
</term>
, these
|
#18740
Together with the use of a larger training set, these modifications combined to reduce the speech recognition word and sentence error rates by a factor of 2.5 and 1.6, respectively, on the October '91 test set. |
|
<term>
sense
</term>
depends on the context of
|
use
|
. We have recently reported on two new
<term>
|
#19172
It is well-known that there are polysemous words like sentence whose meaning or sense depends on the context of use. |
|
successfully obtained . An attempt has been made to
|
use
|
an
<term>
Augmented Transition Network
</term>
|
#12346
An attempt has been made to use an Augmented Transition Network as a procedural dialog model. |
|
similarity
</term>
between
<term>
words
</term>
or
|
use
|
<term>
lexical affinity
</term>
to create
<term>
|
#6367
In comparison with previous models, which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models, in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus. |
|
some
<term>
parsing strategies
</term>
that
|
use
|
the
<term>
control structure
</term>
, and
|
#13428
Representative samples from an entity-oriented language definition are presented, along with a control structure for an entity-oriented parser, some parsing strategies that use the control structure, and worked examples of parses. |
|
<term>
grammar coverage problems
</term>
we
|
use
|
a
<term>
fully-connected first-order statistical
|
#16909
To avoid grammar coverage problems we use a fully-connected first-order statistical class grammar. |