|
text browser
</term>
. We describe how this
|
information
|
is used in a
<term>
prototype system
</term>
|
#317
We describe how this information is used in a prototype system designed to support information workers' access to a pharmaceutical news archive as part of their industry watch function. |
other,30-4-P05-1074,bq |
it can be refined to take
<term>
contextual
|
information
|
</term>
into account . We evaluate our
<term>
|
#9747
We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. |
tech,13-2-P03-1030,bq |
<term>
new event detection
</term>
as
<term>
|
information
|
retrieval task
</term>
and hypothesize on
|
#4076
In this paper we formulate story link detection and new event detection as information retrieval task and hypothesize on the impact of precision and recall on both systems. |
tech,19-1-P03-1068,bq |
large-scale
<term>
acquisition of word-semantic
|
information
|
</term>
, e.g. the construction of
<term>
domain-independent
|
#4954
We describe the ongoing construction of a large, semantically annotated corpus resource as reliable basis for the large-scale acquisition of word-semantic information, e.g. the construction of domain-independent lexica. |
|
involves manual determination of whether an
|
information
|
nugget appears in a system 's response
|
#7561
Until now, the only way to assess the correctness of answers to such questions involves manual determination of whether an information nugget appears in a system's response. |
|
<term>
theory
</term>
specifies how different
|
information
|
in
<term>
memory
</term>
affects the certainty
|
#11953
Unlike logic, the theory specifies how different information in memory affects the certainty of the conclusions drawn. |
other,27-1-C04-1112,bq |
maximum entropy )
</term>
with
<term>
linguistic
|
information
|
</term>
. Instead of building individual
<term>
|
#6007
In this paper, we present a corpus-based supervised word sense disambiguation (WSD) system for Dutch which combines statistical classification (maximum entropy) with linguistic information. |
|
<term>
inference types
</term>
, and how the
|
information
|
found in
<term>
memory
</term>
determines which
|
#12039
The paper also discusses how memory is structured in multiple ways to support the different inference types, and how the information found in memory determines which inference types are triggered. |
other,1-3-I05-6011,bq |
<term>
referents
</term>
. This
<term>
referential
|
information
|
</term>
is vital for resolving
<term>
zero
|
#8603
This referential information is vital for resolving zero pronouns and improving machine translation outputs. |
other,12-5-A94-1007,bq |
</term>
, which provides
<term>
top-down scope
|
information
|
</term>
of the correct
<term>
syntactic structure
|
#19798
This paper presents an English coordinate structure analysis model, which provides top-down scope information of the correct syntactic structure by taking advantage of the symmetric patterns of the parallelism. |
tech,6-1-N06-2038,bq |
are several approaches that model
<term>
|
information
|
extraction
</term>
as a
<term>
token classification
|
#10805
There are several approaches that model information extraction as a token classification task, using various tagging strategies to combine multiple tokens. |
|
</term>
puts different amounts and types of
|
information
|
into its
<term>
lexicon
</term>
according to
|
#15933
Although every natural language system needs a computational lexicon, each system puts different amounts and types of information into its lexicon according to its individual needs. |
tech,1-4-N04-4028,bq |
each
<term>
extracted field
</term>
. The
<term>
|
information
|
extraction system
</term>
we evaluate is
|
#6812
The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model. |
tech,1-4-H01-1001,bq |
large database
</term>
. Traditional
<term>
|
information
|
retrieval techniques
</term>
use a
<term>
histogram
|
#56
Traditional information retrieval techniques use a histogram of keywords as the document representation but oral communication may offer additional indices such as the time and place of the rejoinder and the attendance. |
other,20-1-H92-1026,bq |
takes advantage of detailed
<term>
linguistic
|
information
|
</term>
to resolve
<term>
ambiguity
</term>
.
|
#18914
We describe a generative probabilistic model of natural language, which we call HBG, that takes advantage of detailed linguistic information to resolve ambiguity. |
tech,31-1-H92-1095,bq |
recognition
</term>
,
<term>
knowledge-based
|
information
|
retrieval
</term>
and
<term>
image understanding
|
#19669
Language understanding work at Paramax focuses on applying general-purpose language understanding technology to spoken language understanding, text understanding, and document processing, integrating language understanding with speech recognition, knowledge-based information retrieval and image understanding. |
other,12-3-I05-5003,bq |
</term>
which leverages
<term>
part of speech
|
information
|
</term>
of the
<term>
words
</term>
contributing
|
#8382
We also introduce a novel classification method based on PER which leverages part of speech information of the words contributing to the word matches and non-matches in the sentence. |
other,12-3-N04-1022,bq |
incorporate different levels of
<term>
linguistic
|
information
|
</term>
from
<term>
word strings
</term>
,
<term>
|
#6588
We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences. |
other,17-1-N06-2009,bq |
to variations in the phrasing of an
<term>
|
information
|
need
</term>
. Finding the preferred
<term>
|
#10746
State-of-the-art Question Answering (QA) systems are very sensitive to variations in the phrasing of an information need. |
|
individual needs . However , some of the
|
information
|
needed across
<term>
systems
</term>
is shared
|
#15948
However, some of the information needed across systems is shared or identical information. |