non-native language essays </term> in less | than | 100 <term> words </term> . Even more illuminating |
#644 A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words.
. Our <term> algorithm </term> reported more | than | 99 % <term> accuracy </term> in both <term> language |
#1281 Our algorithm reported more than 99% accuracy in both language identification and key prediction.
rating </term> on average is only 5 % worse | than | the <term> top human-ranked sentence plan |
#1454 We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan.
markers </term> , which includes <term> other ( | than | ) </term> , <term> such ( as ) </term> , and <term> |
#1835 This paper presents a formal analysis for a large class of words called alternative markers, which includes other (than), such (as), and besides.
trainable sentence planner </term> performs better | than | the <term> rule-based systems </term> and the |
#2108 We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system.
Surprisingly , learning <term> phrases </term> longer | than | three <term> words </term> and learning <term> |
#2635 Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance.
</term> with <term> in-degree </term> greater | than | one and <term> out-degree </term> greater than |
#3184 For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one.
than one and <term> out-degree </term> greater | than | one . We create a <term> word-trie </term> |
#3189 For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one.
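Of the sentences above, #3184/#3189 contain the one self-contained definition: a hub is a node whose in-degree and out-degree both exceed one. A minimal sketch of that test in Python, assuming a hypothetical edge-list representation of the directed graph (the find_hubs name and the example edges are illustrative, not taken from the paper):

from collections import Counter

def find_hubs(edges):
    # out-degree counts edges leaving a node; in-degree counts edges arriving at it
    out_deg = Counter(src for src, _ in edges)
    in_deg = Counter(dst for _, dst in edges)
    # the paper's hub criterion: in-degree > 1 and out-degree > 1
    return {n for n in in_deg.keys() & out_deg.keys()
            if in_deg[n] > 1 and out_deg[n] > 1}

# Example: 'b' has two incoming and two outgoing edges, so it is a hub.
edges = [("a", "b"), ("c", "b"), ("b", "d"), ("b", "e")]
print(find_hubs(edges))  # {'b'}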
simpler set of <term> model parameters </term> | than | similar <term> phrase-based models </term> |
#3412 In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models.
a more effective <term> CFG filter </term> | than | that of <term> LTAG </term> . We also investigate |
#5135 We demonstrate that an approximation of HPSG produces a more effective CFG filter than that of LTAG.
|
that the system yields higher performance
|
than
|
a
<term>
baseline
</term>
on all three aspects
|
#6746
Results indicate that the system yields higher performance than a baseline on all three aspects. |
|
SMT models
</term>
to be significantly lower
|
than
|
that of all the dedicated
<term>
WSD models
|
#7934
We present controlled experiments showing the WSD accuracy of current typical SMT models to be significantly lower than that of all the dedicated WSD models considered. |
|
simultaneously using less
<term>
memory
</term>
|
than
|
is required by current
<term>
decoder
</term>
|
#9149
In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memorythan is required by current decoder implementations. |
|
significantly better
<term>
translation quality
</term>
|
than
|
the
<term>
statistical machine translation
|
#9380
Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system, we find that word sense disambiguation does not yield significantly better translation qualitythan the statistical machine translation system alone. |
|
illustrate a framework less restrictive
|
than
|
earlier ones by allowing a
<term>
speaker
|
#14544
We want to illustrate a framework less restrictive than earlier ones by allowing a speaker leeway in forming an utterance about a task and in determining the conversational vehicle to deliver it. |
|
the proposed approach is more describable
|
than
|
other approaches such as those employing
|
#16412
We show that the proposed approach is more describable than other approaches such as those employing a traditional generative phonological approach. |
|
independently trained models
</term>
rather
|
than
|
the usual pooling of all the
<term>
speech
|
#17054
In addition, combination of the training speakers is done by averaging the statistics> of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. |
|
<term>
edges
</term>
adjacent to it , rather
|
than
|
all such
<term>
edges
</term>
as in conventional
|
#17624
As each new edge is added to the chart, the algorithm checks only the topmost of the edges adjacent to it, rather than all such edges as in conventional treatments. |
|
achieved by using
<term>
semantic
</term>
rather
|
than
|
<term>
syntactic categories
</term>
on the
<term>
|
#17716
A further reduction in the search space is achieved by using semantic rather than syntactic categories on the terminal and non-terminal edges, thereby reducing the amount of ambiguity and thus the number of edges, since only edges with a valid semantic interpretation are ever introduced. |
|
has been considered to be more complicated
|
than
|
<term>
analysis
</term>
and
<term>
generation
|
#18062
The transfer phase in machine translation (MT) systems has been considered to be more complicated than analysis and generation, since it is inherently a conglomeration of individual lexical rules. |