#649 Even more illuminating were the factors on which the assessors made their decisions.
#948 However, the improved speech recognition has brought to light a new problem: as <term>dialog systems</term> understand more of what the <term>user</term> tells them, they need to be more sophisticated at responding to the <term>user</term>.
#960 However, the improved speech recognition has brought to light a new problem: as <term>dialog systems</term> understand more of what the <term>user</term> tells them, they need to be more sophisticated at responding to the <term>user</term>.
#1280 Our <term>algorithm</term> reported more than 99% <term>accuracy</term> in both language identification and key prediction.
#1332 Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more <term>sentences</term>.
#2739 The model is designed for use in error correction, with a focus on post-processing the output of <term>black-box OCR systems</term> in order to make it more useful for <term>NLP tasks</term>.
#3223 Those hubs mark the boundary between root and suffix, achieving similar performance to more complex mixtures of techniques.
#3539 <term>Statistical approach</term> is much more robust but less accurate.
#3615 It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology.
#4321 Unlike previous studies that focus on user's knowledge or typical kinds of users, the <term>user model</term> we propose is more comprehensive.
#4633 We approximate Arabic's rich morphology by a model in which a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a <term>morpheme</term>).
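The prefix*-stem-suffix* pattern in #4633 can be illustrated with a brute-force segmenter. This is a minimal sketch, not the paper's model: the morpheme inventories and the transliterated example word below are hypothetical.

```python
# Enumerate analyses of a word under the prefix*-stem-suffix* pattern,
# where * means zero or more morphemes from the inventory.
# PREFIXES/SUFFIXES are hypothetical transliterated inventories.
PREFIXES = {"wa", "al"}
SUFFIXES = {"ha", "at"}

def segmentations(word):
    """Return all (prefixes, stem, suffixes) splits of `word`."""
    results = []

    def strip_prefixes(rest, taken):
        yield rest, taken
        for p in PREFIXES:
            if rest.startswith(p) and len(rest) > len(p):
                yield from strip_prefixes(rest[len(p):], taken + [p])

    def strip_suffixes(rest, taken):
        yield rest, taken
        for s in SUFFIXES:
            if rest.endswith(s) and len(rest) > len(s):
                yield from strip_suffixes(rest[:-len(s)], [s] + taken)

    for middle, prefixes in strip_prefixes(word, []):
        for stem, suffixes in strip_suffixes(middle, []):
            results.append((prefixes, stem, suffixes))
    return results

# e.g. the hypothetical word "waalkitab" yields, among others,
# the analysis (["wa", "al"], "kitab", []).
print(segmentations("waalkitab"))
```

Every split is returned, including the trivial one with zero affixes, mirroring the "zero or more occurrences" reading of the pattern.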
#5133 We demonstrate that an approximation of <term>HPSG</term> produces a more effective <term>CFG filter</term> than that of LTAG.
#5233 Further, the use of multiple <term>machine translation systems</term> provides yet more <term>redundancy</term>, yielding different ways to realize that information in English.
#6647 The ambiguity resolution of right-side dependencies is essential for dependency parsing of <term>sentences</term> with two or more <term>verbs</term>.
#7854 We argue that a more sophisticated and fine-grained <term>annotation</term> in the <term>treebank</term> would have very positive effects on stochastic parsers trained on the treebank and on grammars induced from the treebank, and it would make the <term>treebank</term> more valuable as a source of data for theoretical linguistic investigations.
#7888 We argue that a more sophisticated and fine-grained <term>annotation</term> in the <term>treebank</term> would have very positive effects on stochastic parsers trained on the treebank and on grammars induced from the treebank, and it would make the <term>treebank</term> more valuable as a source of data for theoretical linguistic investigations.
#10171 We show that this task can be done using <term>bilingual parallel corpora</term>, a much more commonly available resource.
#10961 We prove that while IBM Models 1-2 are conceptually and computationally simple, computations involving the higher (and more useful) <term>models</term> are hard.
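The claim in #10961 that IBM Models 1-2 are computationally simple can be seen in how Model 1's E-step factorizes over word pairs. Below is a standard textbook-style EM sketch on a toy parallel corpus; the corpus and all names are illustrative, not the authors' code.

```python
from collections import defaultdict

# Toy parallel corpus of (foreign, english) sentence pairs -- illustrative only.
corpus = [(["das", "haus"], ["the", "house"]),
          (["das", "buch"], ["the", "book"]),
          (["ein", "buch"], ["a", "book"])]

# Uniform initialization of the translation table t(f|e).
f_vocab = {f for fs, _ in corpus for f in fs}
t = defaultdict(lambda: 1.0 / len(f_vocab))

for _ in range(50):                       # EM iterations
    count = defaultdict(float)            # expected counts c(f, e)
    total = defaultdict(float)            # expected counts c(e)
    for fs, es in corpus:
        for f in fs:
            norm = sum(t[(f, e)] for e in es)
            for e in es:
                delta = t[(f, e)] / norm  # posterior alignment probability
                count[(f, e)] += delta
                total[e] += delta
    for (f, e), c in count.items():       # M-step: renormalize
        t[(f, e)] = c / total[e]

# t(haus|house) converges toward 1.0 on this corpus.
print(round(t[("haus", "house")], 3))
```

Each E-step sums independently over word pairs, so an iteration is linear in corpus size; the hardness result in #10961 concerns the higher models, whose alignment structure does not factorize this way.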
#12235 Finally, we have shown that these results can be improved using a bigger and a more homogeneous <term>corpus</term> to train, that is, a bigger corpus written by one unique author.
#13088 <term>Path-based inference</term> is more efficient, while <term>node-based inference