model,5-3-C04-1103,bq |
<term>
joint source-channel transliteration
|
model
|
</term>
, also called
<term>
n-gram transliteration
|
#5781
Under this framework, a joint source-channel transliteration model, also called n-gram transliteration model (ngram TM), is further proposed to model the transliteration process. |
model,20-2-C04-1147,bq |
<term>
terms
</term>
, an
<term>
independence
|
model
|
</term>
, and a
<term>
parametric affinity
|
#6342
The framework is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms, an independence model, and a parametric affinity model. |
|
response
</term>
. We have already proposed a
|
model
|
,
<term>
TDMT ( Transfer-Driven Machine Translation
|
#20233
We have already proposed a model, TDMT (Transfer-Driven Machine Translation), that translates a sentence utilizing examples effectively and performs accurate structural disambiguation and target word selection. |
|
by
<term>
extrapolation
</term>
. Thus , our
|
model
|
,
<term>
TDMT on APs
</term>
, meets the vital
|
#20351
Thus, our model, TDMT on APs, meets the vital requirements of spoken language translation. |
model,3-4-C92-4207,bq |
<term>
world
</term>
. To reconstruct the
<term>
|
model
|
</term>
, the authors extract the
<term>
qualitative
|
#18457
To reconstruct the model, the authors extract the qualitative spatial constraints from the text, and represent them as the numerical constraints on the spatial attributes of the entities. |
model,14-2-C92-1055,bq |
error
</term>
introduced by the
<term>
language
|
model
|
</term>
, traditional
<term>
statistical approaches
|
#17836
Owing to the problem of insufficient training data and approximation error introduced by the language model, traditional statistical approaches, which resolve ambiguities by indirectly and implicitly using maximum likelihood method, fail to achieve high performance in real applications. |
tech,4-5-A94-1007,bq |
<term>
English coordinate structure analysis
|
model
|
</term>
, which provides
<term>
top-down scope
|
#19792
This paper presents an English coordinate structure analysis model, which provides top-down scope information of the correct syntactic structure by taking advantage of the symmetric patterns of the parallelism. |
tech,20-4-C04-1112,bq |
<term>
accuracy
</term>
over the
<term>
wordform
|
model
|
</term>
. Also , the
<term>
WSD system based
|
#6076
Testing the lemma-based model on the Dutch SENSEVAL-2 test data, we achieve a significant increase in accuracy over the wordform model. |
model,11-3-N03-2036,bq |
</term>
and a
<term>
word-based trigram language
|
model
|
</term>
. During
<term>
training
</term>
, the
|
#3442
During decoding, we use a block unigram model and a word-based trigram language model. |
model,25-2-C04-1147,bq |
model
</term>
, and a
<term>
parametric affinity
|
model
|
</term>
. In comparison with previous
<term>
|
#6348
The framework is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms, an independence model, and a parametric affinity model. |
model,14-1-C80-1073,bq |
Network
</term>
as a procedural
<term>
dialog
|
model
|
</term>
. The development of such a
<term>
|
#12355
An attempt has been made to use an Augmented Transition Network as a procedural dialog model. |
model,34-7-J05-1003,bq |
were not included in the original
<term>
|
model
|
</term>
. The new
<term>
model
</term>
achieved
|
#8833
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
model,25-4-H94-1014,bq |
</term>
as compared to using a
<term>
trigram
|
model
|
</term>
. This paper describes a method of
|
#21289
Using the BU recognition system, experiments show a 7% improvement in recognition accuracy with the mixture trigram models as compared to using a trigram model. |
other,20-1-P06-2110,bq |
word vectors
</term>
in the
<term>
vector space
|
model
|
</term>
. Through two experiments , three
|
#11492
This paper examines what kind of similarity between words can be represented by what kind of word vectors in the vector space model. |
model,27-3-N03-2006,bq |
the possibility of using the
<term>
language
|
model
|
</term>
. We describe a simple
<term>
unsupervised
|
#3151
The two evaluation measures of the BLEU score and the NIST score demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the language model. |
model,25-3-P05-1034,bq |
</term>
, and train a
<term>
tree-based ordering
|
model
|
</term>
. We describe an efficient
<term>
decoder
|
#9271
We align a parallel corpus, project the source dependency parse onto the target sentence, extract dependency treelet translation pairs, and train a tree-based ordering model. |
tech,9-6-P05-1067,bq |
time decoding algorithm
</term>
for the
<term>
|
model
|
</term>
. We evaluate the
<term>
outputs
</term>
|
#9507
We introduce a polynomial time decoding algorithm for the model. |
model,44-4-N04-4028,bq |
features
</term>
of the input in a
<term>
Markov
|
model
|
</term>
. We implement several techniques
|
#6856
The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model. |
model,12-3-C04-1103,bq |
also called
<term>
n-gram transliteration
|
model
|
( ngram TM )
</term>
, is further proposed
|
#5787
Under this framework, a joint source-channel transliteration model, also called n-gram transliteration model (ngram TM), is further proposed to model the transliteration process. |
model,7-7-J05-1003,bq |
log-likelihood
</term>
under a
<term>
baseline
|
model
|
</term>
( that of
<term>
Collins [ 1999 ]
</term>
|
#8807
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |