#2215
This paper describes a method for utterance classification that does not require manual transcription of training data.
#2647
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance.
#4179
Since multiple candidates for the understanding result can be obtained for a user utterance due to the ambiguity of speech understanding, it is not appropriate to decide on a single understanding result after each user utterance.
#5550
Along the way, we present the first comprehensive comparison of unsupervised methods for part-of-speech tagging, noting that published results to date have not been comparable across corpora or lexicons.
#5637
However, such an approach does not work well when there is no distinctive attribute among objects.
#5826
Our study reveals that the proposed method not only reduces an extensive system development effort but also improves the transliteration accuracy significantly.
#5899
But computational linguists seem to be quite dubious about analogies between sentences: they would not be numerous enough to be of any use.
#6177
According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions.
#6243
While sentence extraction as an approach to summarization has been shown to work in documents of certain genres, because of the conversational nature of email communication where utterances are made in relation to one made previously, sentence extraction may not capture the necessary segments of dialogue that would make a summary coherent.
#8828
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model.
#9017
We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other.
#9374
Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system, we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone.
#9946
In this paper we study a set of problems that are of considerable importance to Statistical Machine Translation (SMT) but which have not been addressed satisfactorily by the SMT research community.
#10634
We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks.
#10888
InfoMagnets aims at making exploratory corpus analysis accessible to researchers who are not experts in text mining.
#11245
After several experiments, and trained with a small corpus of 100,000 words, the system correctly guesses when not to place commas, with a precision of 96% and a recall of 98%.
#12252
In this paper, we report a system FROFF which can make a fair copy of not only texts but also graphs and tables indispensable to our papers.
#12598
This paper defends that view, but claims that direct imitation of human performance is not the best way to implement many of these non-literal aspects of communication; that the new technology of powerful personal computers with integral graphics displays offers techniques superior to those of humans for these aspects, while still satisfying human communication needs.
#13166
However, this is not the only area in which the principles of the system might be used, and the aim in building it was simply to demonstrate the workability of the general mechanism, and provide a framework for assessing developments of it.
#13866
Determiners play an important role in conveying the meaning of an utterance, but they have often been disregarded, perhaps because it seemed more important to devise methods to grasp the global meaning of a sentence, even if not in a precise way.