|
these
<term>
evaluation techniques
</term>
|
will
|
provide information about both the
<term>
|
#587
We believe that these evaluation techniques will provide information about both the human language learning process, the translation process and the development of machine translation systems. |
|
information sources
</term>
. We have built and
|
will
|
demonstrate an application of this approach
|
#818
We have built and will demonstrate an application of this approach called LCS-Marine. |
|
<term>
free text
</term>
. The demonstration
|
will
|
focus on how
<term>
JAVELIN
</term>
processes
|
#3664
The demonstration will focus on how JAVELIN processes questions and retrieves the most likely answer candidates from the given text corpus. |
|
<term>
genre
</term>
. Examples and results
|
will
|
be given for
<term>
Arabic
</term>
, but the
|
#4513
Examples and results will be given for Arabic, but the approach is applicable to any language that needs affix removal. |
|
of these systems ,
<term>
accuracy
</term>
|
will
|
always be imperfect . For many reasons
|
#6782
Despite the successes of these systems, accuracy will always be imperfect. |
|
results
</term>
in a short time . The tutorial
|
will
|
cover the basics of
<term>
SMT
</term>
: Theory
|
#8106
The tutorial will cover the basics of SMT: |
|
<term>
users
</term>
of our
<term>
tool
</term>
|
will
|
drive a
<term>
syntax-based decoder
</term>
|
#9914
In our demonstration at ACL, new users of our tool will drive a syntax-based decoder for themselves. |
|
<term>
sentences
</term>
. In this paper , we
|
will
|
present a new
<term>
evaluation measure
</term>
|
#10359
In this paper, we will present a new evaluation measure which explicitly models block reordering as an edit operation. |
|
<term>
natural language interfaces
</term>
|
will
|
never appear cooperative or graceful unless
|
#12561
While such decoding is an essential underpinning, much recent work suggests that natural language interfaces will never appear cooperative or graceful unless they also incorporate numerous non-literal aspects of communication, such as robust communication procedures. |
|
assumption that the input
<term>
text
</term>
|
will
|
be in reasonably neat form , e.g. ,
<term>
|
#12958
Most large text-understanding systems have been designed under the assumption that the input text will be in reasonably neat form, e.g., newspaper stories and other edited texts. |
|
basics of
<term>
monolingual UCG
</term>
, we
|
will
|
show how the two can be integrated , and
|
#15138
After introducing this approach to MT system design, and the basics of monolingual UCG, we will show how the two can be integrated, and present an example from an implemented bi-directional English-Spanish fragment. |
|
expressions
</term>
, the results of which
|
will
|
be incorporated into a
<term>
natural language
|
#15230
This research is part of a larger study of anaphoric expressions, the results of which will be incorporated into a natural language generation system. |
|
it is actually possible , and after that
|
will
|
lead to predictions of missing
<term>
fragments
|
#15559
We shall introduce the concept of a chart that works outward from islands and makes sense of as much of the sentence as is actually possible, and after that will lead to predictions of missing fragments. |
|
3-character Chinese names without title
</term>
. We
|
will
|
show the experimental results for two
<term>
|
#18339
We will show the experimental results for two corpora and compare them with the results of NTHU's statistics-based system, the only system that we know has attacked the same problem. |
|
aspects of a
<term>
parse tree
</term>
that
|
will
|
determine the correct
<term>
parse
</term>
|
#18972
We use a corpus of bracketed sentences, called a Treebank, in combination with decision tree building to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence. |
|
</term>
, it is extremely likely that they
|
will
|
all share the same
<term>
sense
</term>
. This
|
#19253
That is, if a polysemous word such as sentence appears two or more times in a well-written discourse, it is extremely likely that they will all share the same sense. |
|
target word selection
</term>
. This paper
|
will
|
concentrate on the second requirement .
|
#20261
This paper will concentrate on the second requirement. |