tech,19-1-P03-1068,bq |
large-scale
<term>
acquisition of word-semantic
|
information
|
</term>
, e.g. the construction of
<term>
domain-independent
|
#4954
We describe the ongoing construction of a large, semantically annotated corpus resource as a reliable basis for the large-scale acquisition of word-semantic information, e.g. the construction of domain-independent lexica. |
other,27-1-C04-1112,bq |
maximum entropy )
</term>
with
<term>
linguistic
|
information
|
</term>
. Instead of building individual
<term>
|
#6007
In this paper, we present a corpus-based supervised word sense disambiguation (WSD) system for Dutch which combines statistical classification (maximum entropy) with linguistic information. |
other,14-4-E06-1022,bq |
</term>
are combined with
<term>
speaker 's gaze
|
information
|
</term>
. The
<term>
classifiers
</term>
show
|
#10310
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
|
<term>
systems
</term>
is shared or identical
|
information
|
. This paper presents our experience in
|
#15956
However, some of the information needed across systems is shared or identical information. |
|
evaluation techniques
</term>
will provide
|
information
|
about both the
<term>
human language learning
|
#589
We believe that these evaluation techniques will provide information about the human language learning process, the translation process and the development of machine translation systems. |
|
features
</term>
. Then , we explore whether
|
information
|
about
<term>
meeting context
</term>
can aid
|
#10283
Then, we explore whether information about meeting context can aid classifiers' performances. |
|
classifiers
</term>
show little
<term>
gain
</term>
from
|
information
|
about
<term>
meeting context
</term>
. Most
|
#10318
The classifiers show little gain from information about meeting context. |
|
plausible interpretation from a chunk of
|
information
|
accumulated as the constraints . The interpretation
|
#18508
This makes it possible to express the vagueness of the spatial concepts and to derive the maximally plausible interpretation from a chunk of information accumulated as the constraints. |
tech,10-1-H01-1040,bq |
show how two standard outputs from
<term>
|
information
|
extraction ( IE ) systems
</term>
-
<term>
|
#284
In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser. |
tech,6-1-N06-2038,bq |
are several approaches that model
<term>
|
information
|
extraction
</term>
as a
<term>
token classification
|
#10805
There are several approaches that model information extraction as a token classification task, using various tagging strategies to combine multiple tokens. |
tech,1-4-N04-4028,bq |
each
<term>
extracted field
</term>
. The
<term>
|
information
|
extraction system
</term>
we evaluate is
|
#6812
The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model. |
other,26-4-N04-4028,bq |
</term>
which has performed well on
<term>
|
information
|
extraction tasks
</term>
because of its ability
|
#6837
The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model. |
other,20-3-C88-2166,bq |
to be a repository of
<term>
shared lexical
|
information
|
</term>
for use by
<term>
Natural Language
|
#15980
This paper presents our experience in planning and building COMPLEX, a computational lexicon designed to be a repository of shared lexical information for use by Natural Language Processing (NLP) systems. |
|
<term>
inference types
</term>
, and how the
|
information
|
found in
<term>
memory
</term>
determines which
|
#12039
The paper also discusses how memory is structured in multiple ways to support the different inference types, and how the information found in memory determines which inference types are triggered. |
|
write a
<term>
topical report
</term>
, culling
|
information
|
from a large inflow of
<term>
multilingual
|
#3593
The TAP-XL Automated Analyst's Assistant is an application designed to help an English-speaking analyst write a topical report, culling information from a large inflow of multilingual, multimedia data. |
|
drawn primarily on explicit and implicit
|
information
|
from
<term>
machine-readable dictionaries
|
#16000
We have drawn primarily on explicit and implicit information from machine-readable dictionaries (MRD's) to create a broad coverage lexicon. |
|
<term>
recognition tasks
</term>
the role of
|
information
|
from the
<term>
discourse
</term>
and from
|
#14379
This processing description specifies in these recognition tasks the role of information from the discourse and from the participants' knowledge of the domain. |
other,2-2-H92-1026,bq |
, syntactic , semantic , and structural
|
information
|
</term>
from the
<term>
parse tree
</term>
into
|
#18929
HBG incorporates lexical, syntactic, semantic, and structural information from the parse tree into the disambiguation process in a novel way. |
|
tailored to the problem of extracting specific
|
information
|
from
<term>
unrestricted texts
</term>
where
|
#17565
We present an efficient algorithm for chart-based phrase structure parsing of natural language that is tailored to the problem of extracting specific information from unrestricted texts where many of the words are unknown and much of the text is irrelevant to the task. |
other,12-3-N04-1022,bq |
incorporate different levels of
<term>
linguistic
|
information
|
</term>
from
<term>
word strings
</term>
,
<term>
|
#6588
We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences. |