|
of
<term>
storage media and networks
</term>
|
one
|
could just record and store a
<term>
conversation
|
#25
Given the development of storage media and networks one could just record and store a conversation for documentation. |
|
and/or evaluated : Similar to activities
|
one
|
can define subsets of larger
<term>
database
|
#141
Several extensions of this basic idea are being discussed and/or evaluated: Similar to activities one can define subsets of a larger database and detect those automatically, which is shown on a large database of TV shows. |
|
set of inter-related but distinct tasks ,
|
one
|
of which is
<term>
sentence scoping
</term>
|
#1304
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
|
the decision of how to combine them into
|
one
|
or more
<term>
sentences
</term>
. In this
|
#1330
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
|
adopt fundamentally different strategies ,
|
one
|
utilizing primarily
<term>
knowledge-based
|
#2360
The answering agents adopt fundamentally different strategies, one utilizing primarily knowledge-based mechanisms and the other adopting statistical techniques. |
|
</term>
with
<term>
in-degree
</term>
greater than
|
one
|
and
<term>
out-degree
</term>
greater than
|
#3185
For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one. |
|
one and
<term>
out-degree
</term>
greater than
|
one
|
. We create a
<term>
word-trie
</term>
, transform
|
#3190
For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one. |
|
inflected languages
</term>
provided that
|
one
|
can create a small
<term>
manually segmented
|
#4785
We believe this is a state-of-the-art performance and the algorithm can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest. |
|
contains . We give two estimates , a lower
|
one
|
and a higher one . As an
<term>
analogy
</term>
|
#5937
We give two estimates, a lower one and a higher one. |
|
two estimates , a lower one and a higher
|
one
|
. As an
<term>
analogy
</term>
must be valid
|
#5941
We give two estimates, a lower one and a higher one. |
|
</term>
of an
<term>
ambiguous word
</term>
in
|
one
|
<term>
classifier
</term>
, therefore augmenting
|
#6042
The advantage of this novel method is that it clusters all inflected forms of an ambiguous word in one classifier, therefore augmenting the training material available to the algorithm. |
|
. Our approach is based on the idea that
|
one
|
person tends to use one
<term>
expression
|
#6147
Our approach is based on the idea that one person tends to use one expression for one meaning. |
|
on the idea that one person tends to use
|
one
|
<term>
expression
</term>
for one
<term>
meaning
|
#6152
Our approach is based on the idea that one person tends to use one expression for one meaning. |
|
tends to use one
<term>
expression
</term>
for
|
one
|
<term>
meaning
</term>
. According to our assumption
|
#6155
Our approach is based on the idea that one person tends to use one expression for one meaning. |
|
utterances
</term>
are made in relation to
|
one
|
made previously ,
<term>
sentence extraction
|
#6236
While sentence extraction as an approach to summarization has been shown to work in documents of certain genres, because of the conversational nature of email communication where utterances are made in relation to one made previously, sentence extraction may not capture the necessary segments of dialogue that would make a summary coherent. |
|
classifiers
</term>
to form a highly accurate
|
one
|
. Experiments show that this approach is
|
#7054
Boosting, the method in question, combines the moderately accurate hypotheses of several classifiers to form a highly accurate one. |
|
machine translation ( SMT )
</term>
is currently
|
one
|
of the hot spots in
<term>
natural language
|
#7995
Statistical machine translation (SMT) is currently one of the hot spots in natural language processing. |
|
</term>
, we show how
<term>
paraphrases
</term>
in
|
one
|
<term>
language
</term>
can be identified using
|
#9701
Using alignment techniques from phrase-based statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. |
other,6-2-E06-1018,bq |
represents an instantiation of the
<term>
|
one
|
sense per collocation observation
</term>
|
#10112
It represents an instantiation of the one sense per collocation observation (Gale et al., 1992). |
other,16-4-E06-1018,bq |
that it enhances the effect of the
<term>
|
one
|
sense per collocation observation
</term>
|
#10152
This approach differs from other approaches to WSI in that it enhances the effect of theone sense per collocation observation by using triplets of words instead of pairs. |