#2216 This paper describes a method for utterance classification that does not require manual transcription of training data.
#2648 Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance.
#4180 Since multiple candidates for the understanding result can be obtained for a user utterance due to the ambiguity of speech understanding, it is not appropriate to decide on a single understanding result after each user utterance.
#6169 This tool is therefore designed to distinguish between the anaphoric occurrences of il, for which an anaphora resolution system has to look for an antecedent, and the expletive occurrences of this pronoun, for which it does not make sense to look for an antecedent.
#6657 Previous works on shift-reduce dependency parsers may not guarantee the connectivity of a dependency tree due to their weakness at resolving the right-side dependencies.
#7764 Experimental results showed that the proposed method achieves almost 60% accuracy and that there is not a large performance difference between the two models.
#8193 The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model.
#8381 We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other.
#9231 Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system, we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone.
#10830 The organized reading materials would enable learners not only to study the target vocabulary efficiently but also to gain a variety of knowledge through reading.
#10883 In this paper we study a set of problems that are of considerable importance to Statistical Machine Translation (SMT) but which have not been addressed satisfactorily by the SMT research community.
#11571 We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks.
#11825 InfoMagnets aims at making exploratory corpus analysis accessible to researchers who are not experts in text mining.
#12182 After several experiments, and trained with a little corpus of 100,000 words, the system guesses correctly not placing commas with a precision of 96% and a recall of 98%.
#13189 In this paper, we report a system FROFF which can make a fair copy of not only texts but also graphs and tables indispensable to our papers.
#13535 This paper defends that view, but claims that direct imitation of human performance is not the best way to implement many of these non-literal aspects of communication; that the new technology of powerful personal computers with integral graphics displays offers techniques superior to those of humans for these aspects, while still satisfying human communication needs.
#14485 In this sense, operations on SI-Nets are not merely isomorphic to single epistemological objects, but can be viewed as a simulation of processes on a different level, that pertaining to the conceptual system of NL.
#14747 Unconstrained MPS grammars, unfortunately, are not computationally safe.
#14882 However, this is not the only area in which the principles of the system might be used, and the aim in building it was simply to demonstrate the workability of the general mechanism, and provide a framework for assessing developments of it.
#15182 Informally, a disposition is a proposition which is preponderantly, but not necessarily always, true.