|
A
<term>
language learning experiment
</term>
showed that
<term>
assessors
</term>
can differentiate
<term>
native from non-native language essays
</term>
in less
than
100
<term>
words
</term>
.
|
#644
A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words. |
|
Our
<term>
algorithm
</term>
reported more
than
99 %
<term>
accuracy
</term>
in both
<term>
language identification
</term>
and
<term>
key prediction
</term>
.
|
#1281
Our algorithm reported more than 99% accuracy in both language identification and key prediction. |
|
We show that the trained
<term>
SPR
</term>
learns to select a
<term>
sentence plan
</term>
whose
<term>
rating
</term>
on average is only 5 % worse
than
the
<term>
top human-ranked sentence plan
</term>
.
|
#1454
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
|
This paper presents a
<term>
formal analysis
</term>
for a large class of
<term>
words
</term>
called
<term>
alternative markers
</term>
, which includes
<term>
other (
than
)
</term>
,
<term>
such ( as )
</term>
, and
<term>
besides
</term>
.
|
#1835
This paper presents a formal analysis for a large class of words called alternative markers, which includes other (than), such (as), and besides. |
|
We show that the
<term>
trainable sentence planner
</term>
performs better
than
the
<term>
rule-based systems
</term>
and the
<term>
baselines
</term>
, and as well as the
<term>
hand-crafted system
</term>
.
|
#2108
We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system. |
|
Surprisingly , learning
<term>
phrases
</term>
longer
than
three
<term>
words
</term>
and learning
<term>
phrases
</term>
from
<term>
high-accuracy word-level alignment models
</term>
does not have a strong impact on performance .
|
#2635
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. |
|
For our purposes , a
<term>
hub
</term>
is a
<term>
node
</term>
in a
<term>
graph
</term>
with
<term>
in-degree
</term>
greater
than
one and
<term>
out-degree
</term>
greater than one .
|
#3184
For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one. |
|
For our purposes , a
<term>
hub
</term>
is a
<term>
node
</term>
in a
<term>
graph
</term>
with
<term>
in-degree
</term>
greater than one and
<term>
out-degree
</term>
greater
than
one .
|
#3189
For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one. |
|
In this paper , we describe a
<term>
phrase-based unigram model
</term>
for
<term>
statistical machine translation
</term>
that uses a much simpler set of
<term>
model parameters
</term>
than
similar
<term>
phrase-based models
</term>
.
|
#3412
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models. |
|
We demonstrate that an approximation of
<term>
HPSG
</term>
produces a more effective
<term>
CFG filter
</term>
than
that of
<term>
LTAG
</term>
.
|
#5135
We demonstrate that an approximation of HPSG produces a more effective CFG filter than that of LTAG. |
|
Results indicate that the system yields higher performance
than
a
<term>
baseline
</term>
on all three aspects .
|
#6746
Results indicate that the system yields higher performance than a baseline on all three aspects. |
|
We present controlled experiments showing the
<term>
WSD
</term><term>
accuracy
</term>
of current typical
<term>
SMT models
</term>
to be significantly lower
than
that of all the dedicated
<term>
WSD models
</term>
considered .
|
#7934
We present controlled experiments showing the WSD accuracy of current typical SMT models to be significantly lower than that of all the dedicated WSD models considered. |
|
In this paper we describe a novel
<term>
data structure
</term>
for
<term>
phrase-based statistical machine translation
</term>
which allows for the
<term>
retrieval
</term>
of arbitrarily long
<term>
phrases
</term>
while simultaneously using less
<term>
memory
</term>
than
is required by current
<term>
decoder
</term>
implementations .
|
#9149
In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations. |
|
Using a state-of-the-art
<term>
Chinese word sense disambiguation model
</term>
to choose
<term>
translation candidates
</term>
for a typical
<term>
IBM statistical MT system
</term>
, we find that
<term>
word sense disambiguation
</term>
does not yield significantly better
<term>
translation quality
</term>
than
the
<term>
statistical machine translation system
</term>
alone .
|
#9380
Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system, we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone. |
|
We want to illustrate a framework less restrictive
than
earlier ones by allowing a
<term>
speaker
</term>
leeway in forming an
<term>
utterance
</term>
about a task and in determining the conversational vehicle to deliver it .
|
#14544
We want to illustrate a framework less restrictive than earlier ones by allowing a speaker leeway in forming an utterance about a task and in determining the conversational vehicle to deliver it. |
|
We show that the proposed approach is more describable
than
other approaches such as those employing a traditional
<term>
generative phonological approach
</term>
.
|
#16412
We show that the proposed approach is more describable than other approaches such as those employing a traditional generative phonological approach. |
|
In addition , combination of the
<term>
training speakers
</term>
is done by averaging the
<term>
statistics
</term>
of
<term>
independently trained models
</term>
rather
than
the usual pooling of all the
<term>
speech data
</term>
from many
<term>
speakers
</term>
prior to
<term>
training
</term>
.
|
#17054
In addition, combination of the training speakers is done by averaging the statistics of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. |
|
As each new
<term>
edge
</term>
is added to the
<term>
chart
</term>
, the algorithm checks only the topmost of the
<term>
edges
</term>
adjacent to it , rather
than
all such
<term>
edges
</term>
as in conventional treatments .
|
#17624
As each new edge is added to the chart, the algorithm checks only the topmost of the edges adjacent to it, rather than all such edges as in conventional treatments. |
|
A further
<term>
reduction in the search space
</term>
is achieved by using
<term>
semantic
</term>
rather
than
<term>
syntactic categories
</term>
on the
<term>
terminal and non-terminal edges
</term>
, thereby reducing the amount of
<term>
ambiguity
</term>
and thus the number of
<term>
edges
</term>
, since only
<term>
edges
</term>
with a valid
<term>
semantic
</term>
interpretation are ever introduced .
|
#17716
A further reduction in the search space is achieved by using semantic rather than syntactic categories on the terminal and non-terminal edges, thereby reducing the amount of ambiguity and thus the number of edges, since only edges with a valid semantic interpretation are ever introduced. |
|
The
<term>
transfer phase
</term>
in
<term>
machine translation ( MT ) systems
</term>
has been considered to be more complicated
than
<term>
analysis
</term>
and
<term>
generation
</term>
, since it is inherently a conglomeration of individual
<term>
lexical rules
</term>
.
|
#18062
The transfer phase in machine translation (MT) systems has been considered to be more complicated than analysis and generation, since it is inherently a conglomeration of individual lexical rules. |