|
sequences
</term>
. We incorporate this analysis
|
into
|
a
<term>
diagnostic tool
</term>
intended for
|
#7648
We incorporate this analysis into a diagnostic tool intended for developers of machine translation systems, and demonstrate how our application can be used by developers to explore patterns in machine translation output. |
|
<term>
general-purpose NLP components
</term>
|
into
|
a
<term>
machine translation pipeline
</term>
|
#11799
The LOGON MT demonstrator assembles independently valuable general-purpose NLP components into a machine translation pipeline that capitalizes on output quality. |
|
create a
<term>
word-trie
</term>
, transform it
|
into
|
a
<term>
minimal DFA
</term>
, then identify
|
#3199
We create a word-trie, transform it into a minimal DFA, then identify hubs. |
|
the results of which will be incorporated
|
into
|
a
<term>
natural language generation system
</term>
|
#15233
This research is part of a larger study of anaphoric expressions, the results of which will be incorporated into a natural language generation system. |
|
of transforming a
<term>
disposition
</term>
|
into
|
a
<term>
proposition
</term>
is referred to
|
#13591
The process of transforming a disposition into a proposition is referred to as explicitation or restoration. |
|
scruffy texts
</term>
has been incorporated
|
into
|
a working
<term>
computer program
</term>
called
|
#13114
This method of using expectations to aid the understanding of scruffy texts has been incorporated into a working computer program called NOMAD, which understands scruffy texts in the domain of Navy messages. |
|
</term>
which takes these
<term>
features
</term>
|
into
|
account . We introduce a new
<term>
method
|
#8756
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
|
take
<term>
contextual information
</term>
|
into
|
account . We evaluate our
<term>
paraphrase
|
#9748
We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. |
|
example , after
<term>
translation
</term>
|
into
|
an equivalent
<term>
RCG
</term>
, any
<term>
|
#1660
For example, after translation into an equivalent RCG, any tree adjoining grammar can be parsed in O(n^6) time. |
|
</term>
and the real tasks are also taken
|
into
|
consideration by enlarging the
<term>
separation
|
#17901
To make the proposed algorithm robust, the possible variations between the training corpus and the real tasks are also taken into consideration by enlarging the separation margin between the correct candidate and its competing members. |
|
transformed by a
<term>
planning algorithm
</term>
|
into
|
efficient
<term>
Prolog
</term>
, cf.
<term>
|
#12920
The resulting logical expression is then transformed by a planning algorithm into efficient Prolog, cf. query optimisation in a relational database. |
|
grammatical formalisms
</term>
can be translated
|
into
|
equivalent
<term>
RCGs
</term>
without increasing
|
#1644
In particular, range concatenation languages [RCL] can be parsed in polynomial time and many classical grammatical formalisms can be translated into equivalent RCGs without increasing their worst-case parsing time complexity. |
|
different amounts and types of information
|
into
|
its
<term>
lexicon
</term>
according to its
|
#15934
Although every natural language system needs a computational lexicon, each system puts different amounts and types of information into its lexicon according to its individual needs. |
|
</term>
and the decision of how to combine them
|
into
|
one or more
<term>
sentences
</term>
. In this
|
#1329
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
|
task , and gives them translingual reach
|
into
|
other
<term>
languages
</term>
by leveraging
|
#3626
It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology. |
|
basics of
<term>
SMT
</term>
: Theory will be put
|
into
|
practice .
<term>
STTK
</term>
, a
<term>
statistical
|
#8117
Theory will be put into practice. |
|
integrating
<term>
automatic Q/A
</term>
applications
|
into
|
real-world environments .
<term>
FERRET
</term>
|
#11650
This paper describes FERRET, an interactive question-answering (Q/A) system designed to address the challenges of integrating automatic Q/A applications into real-world environments. |
|
</term>
of the
<term>
discourse
</term>
aggregate
|
into
|
<term>
segments
</term>
, recognizing the
<term>
|
#14337
Discourse processing requires recognizing how the utterances of the discourse aggregate into segments, recognizing the intentions expressed in the discourse and the relationships among intentions, and tracking the discourse through the operation of the mechanisms associated with attentional state. |
|
</term>
can be incrementally incorporated
|
into
|
the
<term>
dictionary
</term>
after the interaction
|
#18249
Detected unknown words can be incrementally incorporated into the dictionary after the interaction with the user. |
|
information
</term>
from the
<term>
parse tree
</term>
|
into
|
the
<term>
disambiguation process
</term>
in
|
#18934
HBG incorporates lexical, syntactic, semantic, and structural information from the parse tree into the disambiguation process in a novel way.