other,18-1-P01-1009,bq
#1835
This paper presents a formal analysis for a large class of words called alternative markers, which includes other (than), such (as), and besides.
#644
A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words.
#1281
Our algorithm reported more than 99% accuracy in both language identification and key prediction.
#6746
Results indicate that the system yields higher performance than a baseline on all three aspects.
#21186
This paper reviews the theoretical literature, and shows why many of the theoretical accounts actually have worse coverage than accounts based on processing.
#17624
As each new edge is added to the chart, the algorithm checks only the topmost of the edges adjacent to it, rather than all such edges as in conventional treatments.
#18062
The transfer phase in machine translation (MT) systems has been considered to be more complicated than analysis and generation, since it is inherently a conglomeration of individual lexical rules.
#14544
We want to illustrate a framework less restrictive than earlier ones by allowing a speaker leeway in forming an utterance about a task and in determining the conversational vehicle to deliver it.
#20058
This leads us to consider the assignment of descriptors from individual phrases rather than from the weighted sum of a word set representation.
#9149
In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations.
#3189
For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one.
#3184
For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one.
#16412
We show that the proposed approach is more describable than other approaches such as those employing a traditional generative phonological approach.
#3412
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models.
#20040
One of the distinguishing features of a more linguistically sophisticated representation of documents over a word set based representation of them is that linguistically sophisticated units are more frequently individually good predictors of document descriptors (keywords) than single words are.
#17716
A further reduction in the search space is achieved by using semantic rather than syntactic categories on the terminal and non-terminal edges, thereby reducing the amount of ambiguity and thus the number of edges, since only edges with a valid semantic interpretation are ever introduced.
#7934
We present controlled experiments showing the WSD accuracy of current typical SMT models to be significantly lower than that of all the dedicated WSD models considered.
#20709
In order to judge three types of the errors, which are characters wrongly substituted, deleted or inserted in a Japanese bunsetsu and an English word, and to correct these errors, this paper proposes new methods using m-th order Markov chain model for Japanese kanji-kana characters and English alphabets, assuming that Markov probability of a correct chain of syllables or kanji-kana characters is greater than that of erroneous chains.
#5135
We demonstrate that an approximation of HPSG produces a more effective CFG filter than that of LTAG.
#2108
We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system.