other,26-4-N04-4028,bq |
</term>
which has performed well on
<term>
|
information
|
extraction tasks
</term>
because of its ability
|
#6837
The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model. |
other,7-1-H05-1005,bq |
</term>
. In this paper , we use the
<term>
|
information
|
redundancy
</term>
in
<term>
multilingual input
|
#7132
In this paper, we use the information redundancy in multilingual input to correct errors in machine translation and thus improve the quality of multilingual summaries. |
|
</term>
is in
<term>
English
</term>
. Typically ,
|
information
|
that makes it to a
<term>
summary
</term>
appears
|
#7178
Typically, information that makes it to a summary appears in many different lexical-syntactic forms in the input documents. |
other,20-4-H05-1005,bq |
yielding different ways to realize that
<term>
|
information
|
</term>
in
<term>
English
</term>
. We demonstrate
|
#7216
Further, the use of multiple machine translation systems provides yet more redundancy, yielding different ways to realize that information in English. |
|
involves manual determination of whether an
|
information
|
nugget appears in a system 's response
|
#7561
Until now, the only way to assess the correctness of answers to such questions involves manual determination of whether an information nugget appears in a system's response. |
other,12-3-I05-5003,bq |
</term>
which leverages
<term>
part of speech
|
information
|
</term>
of the
<term>
words
</term>
contributing
|
#8382
We also introduce a novel classification method based on PER which leverages part of speech information of the words contributing to the word matches and non-matches in the sentence. |
other,1-3-I05-6011,bq |
<term>
referents
</term>
. This
<term>
referential
|
information
|
</term>
is vital for resolving
<term>
zero
|
#8603
This referential information is vital for resolving zero pronouns and improving machine translation outputs. |
other,11-1-P05-1034,bq |
translation
</term>
that combines
<term>
syntactic
|
information
|
</term>
in the
<term>
source language
</term>
|
#9213
We describe a novel approach to statistical machine translation that combines syntactic information in the source language with recent advances in phrasal translation. |
other,30-4-P05-1074,bq |
it can be refined to take
<term>
contextual
|
information
|
</term>
into account . We evaluate our
<term>
|
#9747
We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. |
|
features
</term>
. Then , we explore whether
|
information
|
about
<term>
meeting context
</term>
can aid
|
#10283
Then, we explore whether information about meeting context can aid classifiers' performances. |
other,14-4-E06-1022,bq |
</term>
are combined with
<term>
speaker 's gaze
|
information
|
</term>
. The
<term>
classifiers
</term>
show
|
#10310
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
|
classifiers
</term>
show little
<term>
gain
</term>
from
|
information
|
about
<term>
meeting context
</term>
. Most
|
#10318
The classifiers show little gain from information about meeting context. |
other,17-1-N06-2009,bq |
to variations in the phrasing of an
<term>
|
information
|
need
</term>
. Finding the preferred
<term>
|
#10746
State-of-the-art Question Answering (QA) systems are very sensitive to variations in the phrasing of an information need. |
tech,6-1-N06-2038,bq |
are several approaches that model
<term>
|
information
|
extraction
</term>
as a
<term>
token classification
|
#10805
There are several approaches that model information extraction as a token classification task, using various tagging strategies to combine multiple tokens. |
|
</term>
interacts with a system while gathering
|
information
|
related to a particular scenario . This
|
#11689
FERRET utilizes a novel approach to Q/A known as predictive questioning which attempts to identify the questions (and answers) that users need by analyzing how a user interacts with a system while gathering information related to a particular scenario. |
|
<term>
theory
</term>
specifies how different
|
information
|
in
<term>
memory
</term>
affects the certainty
|
#11953
Unlike logic, the theory specifies how different information in memory affects the certainty of the conclusions drawn. |
|
<term>
inference types
</term>
, and how the
|
information
|
found in
<term>
memory
</term>
determines which
|
#12039
The paper also discusses how memory is structured in multiple ways to support the different inference types, and how the information found in memory determines which inference types are triggered. |
|
<term>
recognition tasks
</term>
the role of
|
information
|
from the
<term>
discourse
</term>
and from
|
#14379
This processing description specifies in these recognition tasks the role of information from the discourse and from the participants' knowledge of the domain. |
|
yet have .
<term>
Semantic
</term>
and other
|
information
|
may still be incorporated , but as constraints
|
#15100
Semantic and other information may still be incorporated, but as constraints on the translation relation, not as levels of textual representation. |
|
</term>
puts different amounts and types of
|
information
|
into its
<term>
lexicon
</term>
according to
|
#15933
Although every natural language system needs a computational lexicon, each system puts different amounts and types of information into its lexicon according to its individual needs. |