|
trainable sentence planner
</term>
performs better
|
than
|
the
<term>
rule-based systems
</term>
and the
|
#2108
We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system. |
|
that the system yields higher performance
|
than
|
a
<term>
baseline
</term>
on all three aspects
|
#6746
Results indicate that the system yields higher performance than a baseline on all three aspects. |
|
SMT models
</term>
to be significantly lower
|
than
|
that of all the dedicated
<term>
WSD models
|
#7934
We present controlled experiments showing the WSD accuracy of current typical SMT models to be significantly lower than that of all the dedicated WSD models considered. |
|
the proposed approach is more describable
|
than
|
other approaches such as those employing
|
#16412
We show that the proposed approach is more describable than other approaches such as those employing a traditional generative phonological approach. |
|
theoretical accounts actually have worse coverage
|
than
|
accounts based on processing . Finally
|
#21186
This paper reviews the theoretical literature, and shows why many of the theoretical accounts actually have worse coverage than accounts based on processing. |
|
independently trained models
</term>
rather
|
than
|
the usual pooling of all the
<term>
speech
|
#17054
In addition, combination of the training speakers is done by averaging the statistics of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. |
|
<term>
edges
</term>
adjacent to it , rather
|
than
|
all such
<term>
edges
</term>
as in conventional
|
#17624
As each new edge is added to the chart, the algorithm checks only the topmost of the edges adjacent to it, rather than all such edges as in conventional treatments. |
|
. Our
<term>
algorithm
</term>
reported more
|
than
|
99 %
<term>
accuracy
</term>
in both
<term>
language
|
#1281
Our algorithm reported more than 99% accuracy in both language identification and key prediction. |
|
</term>
with
<term>
in-degree
</term>
greater
|
than
|
one and
<term>
out-degree
</term>
greater than
|
#3184
For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one. |
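The sentence above gives a complete, checkable definition: a hub is any node whose in-degree and out-degree are both greater than one. A minimal sketch of that check, with illustrative names (`find_hubs`, the sample edge list) that are not from the source:

```python
from collections import defaultdict

def find_hubs(edges):
    """Return nodes whose in-degree and out-degree both exceed one."""
    indeg = defaultdict(int)
    outdeg = defaultdict(int)
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
    nodes = set(indeg) | set(outdeg)
    return {n for n in nodes if indeg[n] > 1 and outdeg[n] > 1}

# 'h' has in-degree 2 (from a, b) and out-degree 2 (to c, d), so it is a hub;
# 'a' has out-degree 2 but in-degree 0, so it is not.
edges = [("a", "h"), ("b", "h"), ("h", "c"), ("h", "d"), ("a", "b")]
print(find_hubs(edges))  # -> {'h'}
```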
|
a more effective
<term>
CFG filter
</term>
|
than
|
that of
<term>
LTAG
</term>
. We also investigate
|
#5135
We demonstrate that an approximation of HPSG produces a more effective CFG filter than that of LTAG. |
other,18-1-P01-1009,bq |
markers
</term>
, which includes
<term>
other (
|
than
|
)
</term>
,
<term>
such ( as )
</term>
, and
<term>
|
#1835
This paper presents a formal analysis for a large class of words called alternative markers, which includes other (than), such (as), and besides. |
|
becomes a crucial issue recently . Rather
|
than
|
using
<term>
length-based or translation-based
|
#20543
Rather than using length-based or translation-based criterion, a part-of-speech-based criterion is proposed. |
|
has been considered to be more complicated
|
than
|
<term>
analysis
</term>
and
<term>
generation
|
#18062
The transfer phase in machine translation (MT) systems has been considered to be more complicated than analysis and generation, since it is inherently a conglomeration of individual lexical rules. |
|
rating
</term>
on average is only 5 % worse
|
than
|
the
<term>
top human-ranked sentence plan
|
#1454
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
|
non-native language essays
</term>
in less
|
than
|
100
<term>
words
</term>
. Even more illuminating
|
#644
A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words. |
|
<term>
kanji-kana characters
</term>
is greater
|
than
|
that of
<term>
erroneous chains
</term>
. From
|
#20709
In order to judge three types of the errors, which are characters wrongly substituted, deleted or inserted in a Japanese bunsetsu and an English word, and to correct these errors, this paper proposes new methods using m-th order Markov chain model for Japanese kanji-kana characters and English alphabets, assuming that Markov probability of a correct chain of syllables or kanji-kana characters is greater than that of erroneous chains. |
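The sentence above rests on one quantitative assumption: under an m-th order Markov chain model over kanji-kana characters (or alphabets), a correct character chain has higher probability than an erroneous one. A minimal first-order (m = 1) sketch of that criterion on English letters; the training list, function names, and add-alpha smoothing are assumptions for illustration, not the paper's actual model:

```python
from collections import defaultdict
import math

def train_bigram(corpus):
    """Count character-bigram transitions, with ^/$ as boundary markers."""
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus:
        padded = "^" + word + "$"
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    return counts

def log_prob(counts, word, alpha=1.0, vocab=30):
    """Smoothed log-probability of a character chain under the bigram model."""
    padded = "^" + word + "$"
    lp = 0.0
    for a, b in zip(padded, padded[1:]):
        total = sum(counts[a].values())
        lp += math.log((counts[a][b] + alpha) / (total + alpha * vocab))
    return lp

corpus = ["the", "then", "than", "that", "this"]
model = train_bigram(corpus)
# A chain attested in training outscores a transposed (erroneous) chain,
# which is the inequality the error-detection criterion relies on.
print(log_prob(model, "than") > log_prob(model, "tahn"))  # -> True
```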
|
document descriptors ( keywords )
</term>
|
than
|
single
<term>
words
</term>
are . This leads
|
#20040
One of the distinguishing features of a more linguistically sophisticated representation of documents over a word set based representation of them is that linguistically sophisticated units are more frequently individually good predictors of document descriptors (keywords) than single words are. |
|
achieved by using
<term>
semantic
</term>
rather
|
than
|
<term>
syntactic categories
</term>
on the
<term>
|
#17716
A further reduction in the search space is achieved by using semantic rather than syntactic categories on the terminal and non-terminal edges, thereby reducing the amount of ambiguity and thus the number of edges, since only edges with a valid semantic interpretation are ever introduced. |
|
significantly better
<term>
translation quality
</term>
|
than
|
the
<term>
statistical machine translation
|
#9380
Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system, we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone. |
|
simpler set of
<term>
model parameters
</term>
|
than
|
similar
<term>
phrase-based models
</term>
|
#3412
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models. |