#5783  Under this framework, a joint source-channel transliteration model, also called n-gram transliteration model (ngram TM), is further proposed to model the transliteration process.
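A joint source-channel n-gram transliteration model scores a source/target pair as an n-gram sequence over aligned transliteration units. The following is a minimal sketch under that idea, assuming pre-aligned (source_unit, target_unit) pairs as training input; the class name, the bigram order, the add-alpha smoothing, and the sample data are all illustrative assumptions, not details from the cited paper.

```python
from collections import defaultdict
import math

class BigramTM:
    """Bigram joint model over (source_unit, target_unit) pair tokens."""

    def __init__(self):
        self.bigram = defaultdict(lambda: defaultdict(int))  # prev pair -> next pair -> count
        self.unigram = defaultdict(int)                      # prev pair -> total count

    def train(self, sequences):
        # Each sequence is a list of aligned (source_unit, target_unit) pairs.
        for seq in sequences:
            prev = ("<s>", "<s>")
            for pair in seq:
                self.bigram[prev][pair] += 1
                self.unigram[prev] += 1
                prev = pair
            self.bigram[prev][("</s>", "</s>")] += 1
            self.unigram[prev] += 1

    def logprob(self, seq, alpha=0.5, vocab=1000):
        # Log joint probability with add-alpha smoothing over an assumed
        # pair-vocabulary size; real systems use stronger smoothing.
        lp = 0.0
        prev = ("<s>", "<s>")
        for pair in list(seq) + [("</s>", "</s>")]:
            num = self.bigram[prev][pair] + alpha
            den = self.unigram[prev] + alpha * vocab
            lp += math.log(num / den)
            prev = pair
        return lp
```

Because the model is joint, the same table scores both directions of transliteration; decoding would search over segmentations and unit pairings, which this sketch omits.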
#4917  Our analysis also highlights the importance of the issue of domain dependence in evaluating WSD programs.
#6978  Our method takes advantage of the different way in which word senses are lexicalised in English and Chinese, and also exploits the large amount of Chinese text available in corpora and on the Web.
#21351  Other contextual clues, such as editing terms, word fragments, and word matchings, are also factored in by modifying the transition probabilities.
#13401  Several advantages from the point of view of language definition are also noted.
#12524  Implications towards automating certain aspects of language learning are also discussed.
#17899  To make the proposed algorithm robust, the possible variations between the training corpus and the real tasks are also taken into consideration by enlarging the separation margin between the correct candidate and its competing members.
#13314  Some examples of the difference between Japanese sentence structure and English sentence structure, which is vital to machine translation are also discussed together with various interesting ambiguities.
#8863  The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data.
#12284  Its selection of fonts, specification of character size are dynamically changeable, and the typing location can be also changed in lateral or longitudinal directions.
#20434  This method can not only detect Japanese homophone errors in compound nouns, but also can find the correct candidates for the detected errors automatically.
#18636  The paper provides an overview of the research conducted at LIMSI in the field of speech processing, but also in the related areas of Human-Machine Communication, including Natural Language Processing, Non Verbal and Multimodal Communication.
#3059  In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams.
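Class-dependent interpolation mixes two N-gram estimates with a weight chosen per word class rather than one global weight. A minimal sketch of that idea, where the class map, the lambda values, and the function name are illustrative assumptions rather than details from the cited paper:

```python
def interpolate(p_in, p_web, word, word_class, lam_by_class):
    """Mix an in-domain estimate p_in with a web-data estimate p_web.

    word_class maps words to classes; lam_by_class gives each class its
    own interpolation weight, falling back to 0.5 for unseen classes.
    """
    lam = lam_by_class.get(word_class.get(word, "other"), 0.5)
    return lam * p_web + (1.0 - lam) * p_in

# Example: a class trusted more in the web model gets a larger lambda.
p = interpolate(0.2, 0.4, "taxi", {"taxi": "noun"}, {"noun": 0.75})
```

The per-class weights would normally be tuned on held-out data (e.g. by EM), which this sketch leaves out.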
#5835  Our study reveals that the proposed method not only reduces an extensive system development effort but also improves the transliteration accuracy significantly.
#12256  In this paper, we report a system FROFF which can make a fair copy of not only texts but also graphs and tables indispensable to our papers.
#866  Requestors can also instruct the system to notify them when the status of a request changes or when a request is complete.
#9489  Second, we describe the graphical model for the machine translation task, which can also be viewed as a stochastic tree-to-tree transducer.
#19315  In addition, it could also be used to help evaluate disambiguation algorithms that did not make use of the discourse constraint.
#8170  It has also successfully been coupled with rule-based and example based machine translation modules to build a multi engine machine translation system.
#8298  This piece of work has also laid a foundation for exploring and harvesting English-Chinese bitexts in a larger volume from the Web.