#1103 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
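
A minimal sketch of the oracle selection described in #1103, assuming each LM has produced one hypothesis word string and word error rate is computed by standard Levenshtein alignment; function names and the toy data are illustrative, not taken from the source.

```python
# Sketch of oracle hypothesis selection by word error rate (WER).
# The hypotheses, LM names, and reference are illustrative only.

def word_error_rate(ref, hyp):
    """WER = (substitutions + insertions + deletions) / len(ref)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def oracle_select(reference, hypotheses_by_lm):
    """Return the LM whose hypothesis has the lowest WER against the reference."""
    return min(hypotheses_by_lm.items(),
               key=lambda kv: word_error_rate(reference, kv[1]))

reference = "the cat sat on the mat".split()
hypotheses_by_lm = {
    "lm_a": "the cat sat on a mat".split(),
    "lm_b": "a cat sat on the mat today".split(),
}
best_lm, best_hyp = oracle_select(reference, hypotheses_by_lm)
print(best_lm, word_error_rate(reference, best_hyp))
```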

… string comparison methods</term>, and run each over both character- and word-segmented …
#1504 We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams).
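
To make the setup in #1504 concrete, here is a rough sketch, not the authors' code, of one bag-of-words and one order-sensitive comparison, each applicable to character- or word-segmented input with an N-gram window; the Dice and LCS formulations and all names are assumptions.

```python
# Illustrative only: a bag-of-segments comparison (Dice over N-grams)
# versus an order-sensitive comparison (LCS ratio over N-grams),
# over character- or word-segmented input.

from collections import Counter

def segments(text, unit="char"):
    return list(text) if unit == "char" else text.split()

def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def bag_overlap(a, b, unit="char", n=2):
    """Bag-of-segments: Dice coefficient over N-gram multisets (order ignored)."""
    ca, cb = Counter(ngrams(segments(a, unit), n)), Counter(ngrams(segments(b, unit), n))
    shared = sum((ca & cb).values())
    total = sum(ca.values()) + sum(cb.values())
    return 2 * shared / total if total else 0.0

def order_sensitive(a, b, unit="char", n=2):
    """Order-sensitive: longest common subsequence ratio over N-grams."""
    x, y = ngrams(segments(a, unit), n), ngrams(segments(b, unit), n)
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return 2 * dp[len(x)][len(y)] / (len(x) + len(y)) if x and y else 0.0

# Same bag of words, different order: the two measures diverge.
print(bag_overlap("word segmented data", "segmented word data", unit="word", n=1))
print(order_sensitive("word segmented data", "segmented word data", unit="word", n=1))
```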

…</term> created by the <term>system</term> during each <term>question answering session</term>.
#3707 The operation of the system will be explained in depth through browsing the repository of data objects created by the system during each question answering session.

… single <term>understanding result</term> after each <term>user utterance</term>. By holding …
#4190 Since multiple candidates for the understanding result can be obtained for a user utterance due to the ambiguity of speech understanding, it is not appropriate to decide on a single understanding result after each user utterance.

… generate <term>cooperative responses</term> to each user in <term>spoken dialogue systems</term> …
#4293 We address appropriate user modeling in order to generate cooperative responses to each user in spoken dialogue systems.

… questions</term> central to understanding each story in our <term>corpus</term>. Because …
#5777 Annotators generated a list of questions central to understanding each story in our corpus.

…</term> providing an <term>answer</term> to each <term>question</term>. To address the <term>…
#5814 Judges found sentences providing an answer to each question.

… which compares the <term>similarity</term> of each <term>sentence</term> to the <term>input question …
#5875 Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of each sentence to the input question via IDF-weighted word overlap.
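
The baseline in #5875, which scores each sentence against the input question by IDF-weighted word overlap, can be sketched roughly as follows; the toy corpus, tokenization, and the unnormalized summed-IDF score are assumptions rather than details from the source.

```python
# Rough sketch of an IDF-weighted word-overlap baseline: score each
# candidate sentence by the summed IDF of the words it shares with the
# input question. Corpus and tokenizer are illustrative assumptions.

import math
from collections import Counter

def idf_table(documents):
    """IDF(w) = log(N / df(w)) over a list of tokenized documents."""
    n = len(documents)
    df = Counter(w for doc in documents for w in set(doc))
    return {w: math.log(n / c) for w, c in df.items()}

def overlap_score(question, sentence, idf):
    shared = set(question) & set(sentence)
    return sum(idf.get(w, 0.0) for w in shared)

corpus = [s.lower().split() for s in [
    "the story describes a storm at sea",
    "the captain kept a daily log",
    "a storm damaged the ship",
]]
idf = idf_table(corpus)

question = "what damaged the ship".lower().split()
best = max(corpus, key=lambda sent: overlap_score(question, sent, idf))
print(best)
```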

… changes in the number of papers published for each research organization and on each research …
#6568 We determined the changes in the number of papers published for each research organization and on each research area as well as the relationship between research organizations and research areas.

… published for each research organization and on each research area as well as the relationship …
#6573 We determined the changes in the number of papers published for each research organization and on each research area as well as the relationship between research organizations and research areas.

… tracks. Analysis of the results shows that each component of the <term>system</term> contributed …
#7045 Analysis of the results shows that each component of the system contributed to the scores.

… sentential paraphrases</term> are collected for each <term>paraphrase class</term> separately …
#7518 Towards deep analysis of compositional classes of paraphrases, we have examined a class-oriented framework for collecting paraphrase examples, in which sentential paraphrases are collected for each paraphrase class separately by means of automatic candidate generation and manual judgement.

…</term> and that the <term>likelihood</term> of each <term>variable</term> can be inferred. A …
#7727 We assume that the context of a sentence is indicated by a latent variable of the model as a topic and that the likelihood of each variable can be inferred.

… a set of <term>candidate parses</term> for each <term>input sentence</term>, with associated …
#8038 The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses.
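
The setup in #8038, a base parser emitting candidate parses whose probabilities define an initial ranking, is sketched below; the subsequent reranking step, the feature values, and the weights are invented purely for illustration.

```python
# Illustrative n-best reranking setup: a base parser supplies candidate
# parses with probabilities (the initial ranking); a reranker rescores
# them with extra features. All numbers here are made up.

import math

candidates = [
    # (parse, base_probability, feature_vector)
    ("(S (NP he) (VP saw (NP the man) (PP with the telescope)))", 0.55, [1.0, 0.0]),
    ("(S (NP he) (VP saw (NP the man (PP with the telescope))))", 0.45, [0.0, 1.0]),
]

def initial_ranking(cands):
    """Rank candidates by base-parser probability alone."""
    return sorted(cands, key=lambda c: c[1], reverse=True)

def rerank(cands, weights):
    """Combine log base probability with a linear feature score."""
    def score(c):
        _, prob, feats = c
        return math.log(prob) + sum(w * f for w, f in zip(weights, feats))
    return sorted(cands, key=score, reverse=True)

print(initial_ranking(candidates)[0][0])
print(rerank(candidates, weights=[-0.5, 0.8])[0][0])
```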

… not they are <term>translations</term> of each other. Using this approach, we extract …
#8386 We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other.
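
A hedged sketch of the sentence-pair classification in #8386: a maximum entropy (logistic regression) classifier over simple pair features decides whether two sentences are translations of each other. The feature set, the toy lexicon and data, and the use of scikit-learn are assumptions, not the authors' setup.

```python
# Illustrative only: a maximum entropy (logistic regression) classifier
# over sentence pairs, with toy features and toy training data standing
# in for real parallel/non-parallel examples.

from sklearn.linear_model import LogisticRegression

# Tiny bilingual lexicon used only to compute a coverage feature.
LEXICON = {("dog", "perro"), ("cat", "gato"), ("house", "casa"),
           ("red", "roja"), ("eats", "come")}

def pair_features(src, tgt):
    src_w, tgt_w = src.split(), tgt.split()
    linked = sum(1 for s in src_w for t in tgt_w if (s, t) in LEXICON)
    length_ratio = len(src_w) / max(len(tgt_w), 1)
    coverage = linked / max(len(src_w), 1)
    return [length_ratio, coverage]

train_pairs = [
    ("the dog eats", "el perro come", 1),
    ("the red house", "la casa roja", 1),
    ("the dog eats", "la casa roja", 0),
    ("the red house", "el perro come mucho", 0),
]
X = [pair_features(s, t) for s, t, _ in train_pairs]
y = [label for _, _, label in train_pairs]

clf = LogisticRegression().fit(X, y)
print(clf.predict([pair_features("the cat eats", "el gato come")]))
```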

… classifiers</term>. This probably occurs because each <term>model</term> has different strengths …
#9583 This probably occurs because each model has different strengths and weaknesses for modeling the knowledge sources.

… order to take advantage of the strengths of each. Applications of <term>path-based inference …
#13120 A method is described of combining the two styles in a single system in order to take advantage of the strengths of each.

… in lateral or longitudinal directions. Each <term>character</term> has its own width …
#13229 Its selection of fonts, specification of character size are dynamically changeable, and the typing location can be also changed in lateral or longitudinal directions. Each character has its own width and a line length is counted by the sum of each character.

… line length</term> is counted by the sum of each <term>character</term>. By using commands …
#13245 Each character has its own width and a line length is counted by the sum of each character.
<term>
generalized metaphor mappings
</term>
.
Each
<term>
generalized metaphor
</term>
contains
#13413This paper discusses a method of analyzing metaphors based on the existence of a small number of generalized metaphor mappings. Each generalized metaphor contains a recognition network, a basic mapping, additional transfer mappings, and an implicit intention component.