#20893
It is also a drastic generalization of chart parsing, partial instantiation of clauses in a program roughly corresponding to arcs in a chart.
#19315
In addition, it could also be used to help evaluate disambiguation algorithms that did not make use of the discourse constraint.
#9489
Second, we describe the graphical model for the machine translation task, which can also be viewed as a stochastic tree-to-tree transducer.
#5783
Under this framework, a joint source-channel transliteration model, also called n-gram transliteration model (ngram TM), is further proposed to model the transliteration process.
#20434
This method can not only detect Japanese homophone errors in compound nouns, but also can find the correct candidates for the detected errors automatically.
#18228
This method is also capable of handling unknown words, which is important in practical systems.
#12284
Its selection of fonts, specification of character size are dynamically changeable, and the typing location can be also changed in lateral or longitudinal directions.
#17330
A customized interface for browsing and editing was also designed and implemented.
#10077
We also discuss some practical ways of dealing with complexity.
#12524
Implications towards automating certain aspects of language learning are also discussed.
#13314
Some examples of the difference between Japanese sentence structure and English sentence structure, which is vital to machine translation, are also discussed together with various interesting ambiguities.
#12020
The paper also discusses how memory is structured in multiple ways to support the different inference types, and how the information found in memory determines which inference types are triggered.
#10225
Offering advantages like reproducibility and independence of a given biased gold standard, it also enables automatic parameter optimization of the WSI algorithm.
#2080
In order to perform an exhaustive comparison, we also evaluate a hand-crafted template-based generation component, two rule-based sentence planners, and two baseline sentence planners.
#6978
Our method takes advantage of the different way in which word senses are lexicalised in English and Chinese, and also exploits the large amount of Chinese text available in corpora and on the Web.
#21351
Other contextual clues, such as editing terms, word fragments, and word matchings, are also factored in by modifying the transition probabilities.
#10609
We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks.
#11262
It also gets a precision of 70% and a recall of 49% in the task of placing commas.
#12256
In this paper, we report a system FROFF which can make a fair copy of not only texts but also graphs and tables indispensable to our papers.
#4917
Our analysis also highlights the importance of the issue of domain dependence in evaluating WSD programs.