|
ngram TM )
</term>
, is further proposed to
|
model
|
the
<term>
transliteration process
</term>
|
#5797
Under this framework, a joint source-channel transliteration model, also called n-gram transliteration model (ngram TM), is further proposed to model the transliteration process. |
|
</term>
. There are several approaches that
|
model
|
<term>
information extraction
</term>
as a
<term>
|
#10804
There are several approaches that model information extraction as a token classification task, using various tagging strategies to combine multiple tokens. |
|
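The entry above describes modeling information extraction as a token classification task with tagging strategies that combine multiple tokens. A minimal sketch of one widely used strategy (BIO tagging); the function and examples here are illustrative, not taken from the cited work:

```python
def bio_tags(tokens, spans):
    """Assign B/I/O tags to tokens given entity spans.

    spans: list of (start, end_exclusive, label) over token indices.
    The first token of a span gets B-label, later tokens get I-label,
    and all remaining tokens get O.
    """
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

tokens = ["Barack", "Obama", "visited", "Paris"]
spans = [(0, 2, "PER"), (3, 4, "LOC")]
print(bio_tags(tokens, spans))  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```

A token classifier trained on such tags can then recover multi-token entities by merging each B tag with its following I tags.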
KPSG provides
</term>
an explicit development
|
model
|
for constructing a computational
<term>
phonological
|
#16389
The approach of KPSG provides an explicit development model for constructing a computational phonological system: speech recognition and synthesis system. |
|
authors try to reconstruct the geometric
|
model
|
of the global scene from the scenic descriptions
|
#18417
In order to understand the described world, the authors try to reconstruct the geometric model of the global scene from the scenic descriptions drawing a space. |
|
understanding
</term>
. The authors propose a
|
model
|
for analyzing
<term>
English sentences
</term>
|
#19679
The authors propose a model for analyzing English sentences including coordinate conjunctions such as and, or, but and the equivalent words. |
|
abstracting industry
</term>
. This paper proposes a
|
model
|
using
<term>
associative processors ( APs
|
#20197
This paper proposes a model using associative processors (APs) for real-time spoken language translation. |
|
response
</term>
. We have already proposed a
|
model
|
,
<term>
TDMT ( Transfer-Driven Machine Translation
|
#20233
We have already proposed a model, TDMT (Transfer-Driven Machine Translation), that translates a sentence utilizing examples effectively and performs accurate structural disambiguation and target word selection. |
|
by
<term>
extrapolation
</term>
. Thus , our
|
model
|
,
<term>
TDMT on APs
</term>
, meets the vital
|
#20351
Thus, our model, TDMT on APs, meets the vital requirements of spoken language translation. |
|
grammar model
</term>
is considered . The
|
model
|
is based on full
<term>
lexicalization
</term>
|
#21012
The model is based on full lexicalization, head-orientation via valency constraints and dependency relations, inheritance as a means for non-redundant lexicon specification, and concurrency of computation. |
measure(ment),1-8-H90-1060,bq |
target speaker
</term>
. Each
<term>
reference
|
model
|
</term>
is transformed to the
<term>
space
</term>
|
#17172
Each reference model is transformed to the space of the target speaker and combined by averaging. |
measure(ment),18-8-J05-1003,bq |
F-measure
</term>
error over the
<term>
baseline
|
model
|
’s score
</term>
of 88.2 % . The article
|
#8854
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
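The "13% relative decrease" in the sentence above can be verified arithmetically: F-measure error is 100 minus the F-measure, so the baseline error is 11.8 points and the new error is 10.25 points. A quick check:

```python
baseline_f, new_f = 88.2, 89.75
baseline_err = 100 - baseline_f   # 11.8 points of F-measure error
new_err = 100 - new_f             # 10.25 points
rel_decrease = (baseline_err - new_err) / baseline_err
print(round(rel_decrease * 100, 1))  # 13.1, matching the reported ~13% relative decrease
```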
model,1-1-C94-1061,bq |
</term>
and
<term>
Spanish
</term>
. A
<term>
grammar
|
model
|
</term>
for
<term>
concurrent , object-oriented
|
#20815
A grammar model for concurrent, object-oriented natural language parsing is introduced. |
model,1-2-H94-1014,bq |
</term>
or
<term>
paragraph
</term>
. The
<term>
|
model
|
</term>
is an
<term>
m-component mixture
</term>
|
#21233
The model is an m-component mixture of trigram models. |
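For context, an m-component mixture of trigram models scores a word by linearly interpolating the probabilities assigned by m component trigram models under fixed mixture weights. A minimal sketch with toy probabilities (not the cited paper's estimates):

```python
def mixture_trigram_prob(w, context, components, weights):
    """P(w | context) under an m-component mixture of trigram models.

    components: list of dicts mapping (w1, w2, w3) -> probability.
    weights: mixture weights, assumed to sum to 1.
    """
    w1, w2 = context
    return sum(lam * comp.get((w1, w2, w), 0.0)
               for lam, comp in zip(weights, components))

# Two toy component trigram models over a tiny vocabulary.
comp_a = {("the", "cat", "sat"): 0.6}
comp_b = {("the", "cat", "sat"): 0.2}
p = mixture_trigram_prob("sat", ("the", "cat"), [comp_a, comp_b], [0.5, 0.5])
print(p)  # 0.5 * 0.6 + 0.5 * 0.2 = 0.4
```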
model,1-2-N03-1018,bq |
</term>
of an
<term>
OCR system
</term>
. The
<term>
|
model
|
</term>
is designed for use in
<term>
error
|
#2713
The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks. |
model,1-2-P03-1050,bq |
Arabic ) stemmer
</term>
. The
<term>
stemming
|
model
|
</term>
is based on
<term>
statistical machine
|
#4448
The stemming model is based on statistical machine translation and it uses an English stemmer and a small (10K sentences) parallel corpus as its sole training resources. |
model,1-2-P05-1069,bq |
machine translation ( SMT )
</term>
. The
<term>
|
model
|
</term>
predicts
<term>
blocks
</term>
with orientation
|
#9574
The model predicts blocks with orientation to handle local phrase re-ordering. |
model,1-3-C94-1080,bq |
computation
</term>
. The
<term>
computation
|
model
|
</term>
relies upon the
<term>
actor paradigm
|
#21043
The computation model relies upon the actor paradigm, with concurrency entering through asynchronous message passing between actors. |
model,1-3-H05-1012,bq |
performance
</term>
. The
<term>
probabilistic
|
model
|
</term>
used in the
<term>
alignment
</term>
|
#7296
The probabilistic model used in the alignment directly models the link decisions. |
model,1-4-P03-1051,bq |
given
<term>
input
</term>
. The
<term>
language
|
model
|
</term>
is initially estimated from a small
|
#4691
The language model is initially estimated from a small manually segmented corpus of about 110,000 words. |
model,10-2-H92-1016,bq |
</term>
, the use of a
<term>
bigram language
|
model
|
</term>
in conjunction with a
<term>
probabilistic
|
#18721
These include context-dependent phonetic modelling, the use of a bigram language model in conjunction with a probabilistic LR parser, and refinements made to the lexicon. |