#406
At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory).

#621
This, the first experiment in a series of experiments, looks at the intelligibility of MT output.

#748
Additionally, they were asked to mark the word at which they made this decision.

#859
The request is passed to a mobile, intelligent agent for execution at the appropriate database.

#962
However, the improved speech recognition has brought to light a new problem: as dialog systems understand more of what the user tells them, they need to be more sophisticated at responding to the user.

#2386
We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels.

#4399
Dialogue strategies based on the user modeling are implemented in the Kyoto city bus information system that has been developed at our laboratory.

#5315
The new criterion - meaning-entailing substitutability - fits the needs of semantic-oriented NLP applications and can be evaluated directly (independent of an application) at a good level of human agreement.

#6395
In comparison with previous models, which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models, in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus.
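The contrast in #6395 - fixed windows versus unrestricted distance - can be made concrete with a small sketch. This is not the paper's implementation; the function name and the toy corpus are our own illustration of counting co-occurrences of word pairs together with their separation, rather than only pairs inside a fixed-size window.

```python
# Sketch: count co-occurrences of word pairs at ANY distance in a
# token sequence, keyed by (word1, word2, distance). A fixed-window
# model would discard pairs whose distance exceeds the window size;
# here every ordered pair in the sequence is kept. Note this is
# O(n^2) in the number of tokens, so it is only a toy illustration.
from collections import Counter
from itertools import combinations

def pair_distances(tokens):
    """Return Counter over (left_word, right_word, distance) triples."""
    counts = Counter()
    for (i, w1), (j, w2) in combinations(enumerate(tokens), 2):
        counts[(w1, w2, j - i)] += 1
    return counts

corpus = "the cat chased the mouse".split()
counts = pair_distances(corpus)
print(counts[("the", "mouse", 1)])  # "the" immediately before "mouse"
print(counts[("the", "mouse", 4)])  # the same pair at distance 4
```

Both distances are recorded for the pair ("the", "mouse"), which is exactly the information a fixed window of size, say, 2 would lose for the long-range occurrence.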

#7741
This study establishes the equivalence between the standard use of BLEU in word n-grams and its application at the character level.

#7750
The use of BLEU at the character level eliminates the word segmentation problem: it makes it possible to directly compare commercial systems outputting unsegmented texts with, for instance, statistical MT systems which usually segment their outputs.

#7881
At the same time, the recent improvements in the BLEU scores of statistical machine translation (SMT) suggest that SMT models are good at predicting the right translation of the words in source language sentences.

#8490
We measured the quality of the paraphrases produced in an experiment, i.e., (i) their grammaticality: at least 99% correct sentences; (ii) their equivalence in meaning: at least 96% correct paraphrases either by meaning equivalence or entailment; and, (iii) the amount of internal lexical and syntactical variation in a set of paraphrases: slightly superior to that of hand-produced sets.

#8505
We measured the quality of the paraphrases produced in an experiment, i.e., (i) their grammaticality: at least 99% correct sentences; (ii) their equivalence in meaning: at least 96% correct paraphrases either by meaning equivalence or entailment; and, (iii) the amount of internal lexical and syntactical variation in a set of paraphrases: slightly superior to that of hand-produced sets.

#9416
Syntax-based statistical machine translation (MT) aims at applying statistical models to structured data.

#9906
In our demonstration at ACL, new users of our tool will drive a syntax-based decoder for themselves.

#9992
Our work aims at providing useful insights into the computational complexity of those problems.

#10878
InfoMagnets aims at making exploratory corpus analysis accessible to researchers who are not experts in text mining.

#11855
In this format, developed by the LNR research group at The University of California at San Diego, verbs are represented as interconnected sets of subpredicates.

#11860
In this format, developed by the LNR research group at The University of California at San Diego, verbs are represented as interconnected sets of subpredicates.