... in less than 100 <term>words</term>. Even | more | illuminating was the factors on which the ...
#649
Even more illuminating was the factors on which the assessors made their decisions.

... as <term>dialog systems</term> understand | more | of what the <term>user</term> tells them ...
#948
However, the improved speech recognition has brought to light a new problem: as dialog systems understand more of what the user tells them, they need to be more sophisticated at responding to the user.

... <term>user</term> tells them, they need to be | more | sophisticated at responding to the <term> ...
#960
However, the improved speech recognition has brought to light a new problem: as dialog systems understand more of what the user tells them, they need to be more sophisticated at responding to the user.

... </term>. Our <term>algorithm</term> reported | more | than 99% <term>accuracy</term> in both <term> ...
#1280
Our algorithm reported more than 99% accuracy in both language identification and key prediction.

... decision of how to combine them into one or | more | <term>sentences</term>. In this paper, ...
#1332
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences.

... <term>OCR systems</term> in order to make it | more | useful for <term>NLP tasks</term>. We present ...
#2738
The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks.

... achieving similar <term>performance</term> to | more | complex mixtures of techniques. We present ...
#3222
Those hubs mark the boundary between root and suffix, achieving similar performance to more complex mixtures of techniques.

... <term>Statistical approach</term> is much | more | robust but less accurate. <term>Cooperative ...
#3538
Statistical approach is much more robust but less accurate.

... the ability to spend their time finding | more | data relevant to their task, and gives ...
#3614
It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology.

... the <term>user model</term> we propose is | more | comprehensive. Specifically, we set up ...
#4319
Unlike previous studies that focus on user's knowledge or typical kinds of users, the user model we propose is more comprehensive.

... stem-suffix*</term> (* denotes zero or | more | occurrences of a <term>morpheme</term>) ...
#4631
We approximate Arabic's rich morphology by a model that a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme).
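
The morpheme pattern quoted in #4631 is regular, so it can be checked with a regular expression. Below is a minimal sketch, assuming small made-up inventories of prefixes, stems, and suffixes; the paper's actual morpheme sets are not given in the excerpt.

    import re

    # Toy inventories; these lists are illustrative assumptions, not the
    # inventories used in the paper excerpted at #4631.
    PREFIXES = ["wa", "al", "bi"]
    STEMS = ["ktb", "drs"]
    SUFFIXES = ["ha", "at", "wn"]

    def alternation(morphemes):
        """Build a regex alternation group from a morpheme list."""
        return "(?:" + "|".join(map(re.escape, morphemes)) + ")"

    # prefix*-stem-suffix*: zero or more prefixes, one stem, zero or
    # more suffixes, with '*' read exactly as in the quoted pattern.
    WORD = re.compile(alternation(PREFIXES) + "*" +
                      alternation(STEMS) +
                      alternation(SUFFIXES) + "*")

    print(bool(WORD.fullmatch("alktbha")))  # al + ktb + ha -> True
    print(bool(WORD.fullmatch("xyz")))      # no stem -> False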

... approximation of <term>HPSG</term> produces a | more | effective <term>CFG filter</term> than that ...
#5131
We demonstrate that an approximation of HPSG produces a more effective CFG filter than that of LTAG.

... system based on lemmas</term> is smaller and | more | robust. We present a <term>text mining ...
#6089
Also, the WSD system based on lemmas is smaller and more robust.

... machine translation systems</term> provides yet | more | <term>redundancy</term>, yielding different ...
#7207
Further, the use of multiple machine translation systems provides yet more redundancy, yielding different ways to realize that information in English.

... bilingual parallel corpora</term>, a much | more | commonly available <term>resource</term> ...
#9682
We show that this task can be done using bilingual parallel corpora, a much more commonly available resource.

... computations involving the higher (and | more | useful) <term>models</term> are <term>hard ...
#10024
We prove that while IBM Models 1-2 are conceptually and computationally simple, computations involving the higher (and more useful) models are hard.
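
The tractability claim in #10024 rests on a standard fact: the IBM Model 1 likelihood factorizes over target positions, P(f|e) = ε/(l+1)^m · ∏_j Σ_i t(f_j|e_i), so it is computable in O(m·l) time. A minimal sketch with a hypothetical hand-set translation table (illustrative numbers, not estimated from data):

    EPSILON = 1.0  # constant sentence-length factor

    # t[f][e] = t(f | e); hypothetical values for illustration only.
    t = {
        "la":     {"the": 0.7, "house": 0.1, "NULL": 0.2},
        "maison": {"house": 0.8, "the": 0.1, "NULL": 0.1},
    }

    def model1_likelihood(f_sent, e_sent):
        """IBM Model 1: product over f positions of sums over e positions."""
        e_words = ["NULL"] + e_sent  # Model 1 adds an empty NULL word
        prob = EPSILON / (len(e_words) ** len(f_sent))
        for f in f_sent:
            prob *= sum(t[f].get(e, 0.0) for e in e_words)
        return prob

    print(model1_likelihood(["la", "maison"], ["the", "house"]))  # ~0.111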

... results can be improved using a bigger and a | more | homogeneous <term>corpus</term> to train ...
#11298
Finally, we have shown that these results can be improved using a bigger and a more homogeneous corpus to train, that is, a bigger corpus written by one unique author.

... </term>. <term>Path-based inference</term> is | more | efficient, while <term>node-based inference ...
#12151
Path-based inference is more efficient, while node-based inference is more general.

... while <term>node-based inference</term> is | more | general. A method is described of combining ...
#12158
Path-based inference is more efficient, while node-based inference is more general.
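
The efficiency/generality contrast in #12151 and #12158 can be made concrete: path-based inference reduces a query to searching for a path over fixed arc labels, a plain graph traversal. A minimal sketch over a hypothetical isa network (node-based inference would instead attach general rules to nodes, gaining generality at higher cost):

    from collections import deque

    # Hypothetical isa network; path-based inference answers isa(x, y)
    # by looking for a directed path of 'isa' arcs from x to y.
    ISA = {"canary": ["bird"], "bird": ["animal"], "animal": []}

    def isa(x, y):
        """Path-based inference as BFS, O(nodes + arcs)."""
        seen, queue = {x}, deque([x])
        while queue:
            node = queue.popleft()
            if node == y:
                return True
            for parent in ISA.get(node, []):
                if parent not in seen:
                    seen.add(parent)
                    queue.append(parent)
        return False

    print(isa("canary", "animal"))  # True via canary -> bird -> animal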

... techniques and the still valuable methods of | more | traditional <term>natural language interfaces ...
#12660
The paper proposes interfaces based on a judicious mixture of these techniques and the still valuable methods of more traditional natural language interfaces.