|
them , they need to be more sophisticated
|
at
|
responding to the
<term>
user
</term>
. The
|
#962
However, the improved speech recognition has brought to light a new problem: as dialog systems understand more of what the user tells them, they need to be more sophisticated at responding to the user. |
|
</term>
level . The use of
<term>
BLEU
</term>
|
at
|
the
<term>
character
</term>
level eliminates
|
#7750
The use of BLEU at the character level eliminates the word segmentation problem: it makes it possible to directly compare commercial systems outputting unsegmented texts with, for instance, statistical MT systems which usually segment their outputs. |
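The sentence above describes applying BLEU with character n-grams instead of word n-grams, so no segmenter is needed. A minimal single-reference sketch of that idea (not the exact metric used in the cited work; `max_n`, the smoothing floor, and the brevity-penalty form are assumptions):

```python
from collections import Counter
import math

def char_bleu(candidate, reference, max_n=4):
    """Illustrative character-level BLEU: n-grams are taken over raw
    characters, so unsegmented text can be scored directly."""
    precisions = []
    for n in range(1, max_n + 1):
        # Multisets of character n-grams for candidate and reference.
        cand = Counter(candidate[i:i + n] for i in range(len(candidate) - n + 1))
        ref = Counter(reference[i:i + n] for i in range(len(reference) - n + 1))
        # Clipped (modified) n-gram precision via multiset intersection.
        overlap = sum((cand & ref).values())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # floor avoids log(0)
    # Brevity penalty computed on character lengths.
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Identical strings score 1.0, and any segmentation of either string into words is irrelevant, which is the point the cited sentence makes.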
|
suggest that
<term>
SMT models
</term>
are good
|
at
|
predicting the right
<term>
translation
</term>
|
#7881
At the same time, the recent improvements in the BLEU scores of statistical machine translation (SMT) suggest that SMT models are good at predicting the right translation of the words in source language sentences. |
|
<term>
MT system
</term>
. In our demonstration
|
at
|
<term>
ACL
</term>
, new
<term>
users
</term>
of
|
#9906
In our demonstration at ACL, new users of our tool will drive a syntax-based decoder for themselves. |
other,44-3-H92-1003,bq |
provides an overview of the research conducted
|
at
|
</term><term>
LIMSI
</term>
in the field of
|
#18626
The paper provides an overview of the research conducted at LIMSI in the field of speech processing, but also in the related areas of Human-Machine Communication, including Natural Language Processing, Non Verbal and Multimodal Communication. |
|
problems of
<term>
SMT
</term>
. Our work aims
|
at
|
providing useful insights into the
<term>
|
#9992
Our work aims at providing useful insights into the computational complexity of those problems. |
|
InfoMagnets
</term>
.
<term>
InfoMagnets
</term>
aims
|
at
|
making
<term>
exploratory corpus analysis
|
#10878
InfoMagnets aims at making exploratory corpus analysis accessible to researchers who are not experts in text mining. |
|
experiment in a series of experiments , looks
|
at
|
the
<term>
intelligibility
</term>
of
<term>
|
#621
This, the first experiment in a series of experiments, looks at the intelligibility of MT output. |
|
information system
</term>
that has been developed
|
at
|
our laboratory . Experimental evaluation
|
#4399
Dialogue strategies based on user modeling are implemented in the Kyoto city bus information system that has been developed at our laboratory. |
|
<term>
word n-grams
</term>
and its application
|
at
|
the
<term>
character
</term>
level . The use
|
#7741
This study establishes the equivalence between the standard use of BLEU in word n-grams and its application at the character level. |
|
, intelligent agent
</term>
for execution
|
at
|
the appropriate
<term>
database
</term>
.
<term>
|
#859
The request is passed to a mobile, intelligent agent for execution at the appropriate database. |
|
basis of
<term>
translation examples
</term>
|
at
|
appropriate points in
<term>
MT. This paper
|
#18104
Currently some attempts are being made to use case-based reasoning in machine translation, that is, to make decisions on the basis of translation examples at appropriate points in MT. |
|
research group at The University of California
|
at
|
San Diego ,
<term>
verbs
</term>
are represented
|
#11860
In this format, developed by the LNR research group at The University of California at San Diego, verbs are represented as interconnected sets of subpredicates. |
|
were asked to mark the
<term>
word
</term>
|
at
|
which they made this decision . The results
|
#748
Additionally, they were asked to mark the word at which they made this decision. |
tool,14-1-H01-1041,bq |
CCLINC ( Common Coalition Language System
|
at
|
Lincoln Laboratory )
</term>
. The
<term>
CCLINC
|
#406
At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory). |
|
to right correcting minor errors . When
|
at
|
very noisy
<term>
portion
</term>
is detected
|
#18170
When a very noisy portion is detected, the parser skips that portion using a fake non-terminal symbol. |
|
.
<term>
Language understanding
</term>
work
|
at
|
<term>
Paramax
</term>
focuses on applying
|
#19640
Language understanding work at Paramax focuses on applying general-purpose language understanding technology to spoken language understanding, text understanding, and document processing, integrating language understanding with speech recognition, knowledge-based information retrieval and image understanding. |
|
format , developed by the LNR research group
|
at
|
The University of California at San Diego
|
#11855
In this format, developed by the LNR research group at The University of California at San Diego, verbs are represented as interconnected sets of subpredicates. |
|
their
<term>
equivalence in meaning
</term>
:
|
at
|
least 96 % correct
<term>
paraphrases
</term>
|
#8505
We measured the quality of the paraphrases produced in an experiment, i.e., (i) their grammaticality: at least 99% correct sentences; (ii) their equivalence in meaning: at least 96% correct paraphrases either by meaning equivalence or entailment; and, (iii) the amount of internal lexical and syntactical variation in a set of paraphrases: slightly superior to that of hand-produced sets. |
|
<term>
words
</term>
or
<term>
phrases
</term>
|
at
|
any distance in the
<term>
corpus
</term>
.
|
#6395
In comparison with previous models, which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models, in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus. |
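The sentence above contrasts fixed-window similarity with counting co-occurrences of word pairs at any distance. A minimal sketch of the unwindowed counting step, assuming sentence-level units and whitespace tokenization (the cited model itself is more elaborate):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(sentences):
    """Count how often each unordered word pair occurs in the same
    sentence, at any distance -- no fixed context window is applied."""
    counts = Counter()
    for sent in sentences:
        # Deduplicate and sort so each pair has one canonical key.
        words = sorted(set(sent.split()))
        for a, b in combinations(words, 2):
            counts[(a, b)] += 1
    return counts
```

Because every pair within a unit is counted, words at opposite ends of a sentence contribute exactly as much as adjacent ones, which is the property the sentence highlights.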