#260 In this presentation, we describe the features of and requirements for a genuinely useful software infrastructure for this purpose.
#441 ... called a <term>semantic frame</term>. The key features of the <term>system</term> include: (i) robust, efficient parsing of Korean (a verb-final language with overt case markers, relatively free word order, and frequent omissions of arguments).
other,7-2-P01-1070,ak
#2153 These models, which are built from <term>shallow linguistic features</term> of questions, are employed to predict target variables which represent a user's informational goals.
other,36-1-N03-1033,ak
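Entry #2153 predicts a user's informational goals from shallow linguistic features of questions. As a rough illustration of what "shallow" features look like in practice, here is a minimal Python sketch; the feature names and the example question are invented for illustration, not taken from the paper.

    # Minimal sketch: shallow linguistic features of a question (hypothetical feature set).
    def shallow_features(question):
        tokens = question.lower().rstrip("?").split()
        wh_words = ("who", "what", "when", "where", "why", "how")
        return {
            "wh_word": tokens[0] if tokens and tokens[0] in wh_words else "none",
            "length": len(tokens),              # question length in tokens
            "has_quotes": '"' in question,      # quoted phrase present?
            "starts_with_be": bool(tokens) and tokens[0] in ("is", "are", "was", "were"),
        }

    print(shallow_features("Who wrote the first statistical parser?"))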
#2947 We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of <term>lexical features</term>, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional log-linear models, and (iv) fine-grained modeling of unknown word features.
other,66-1-N03-1033,ak
#2978 We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional log-linear models, and (iv) fine-grained modeling of <term>unknown word features</term>.
other,12-3-P03-1002,ak
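Entries #2947 and #2978 describe conditional log-linear tag scoring with tag-context and lexical features. A minimal sketch of that kind of scoring follows; the feature templates and weights are invented placeholders, not the published model.

    import math
    from collections import defaultdict

    # Sketch of a conditional log-linear tag score with preceding/following tag
    # context and lexical (word-based) features. Weights are illustrative only.
    weights = defaultdict(float, {
        ("prev_tag=DT", "NN"): 1.2,       # preceding-tag context feature
        ("next_tag=VBZ", "NN"): 0.8,      # following-tag context feature
        ("word=dog", "NN"): 2.0,          # lexical feature
        ("word_pair=the+dog", "NN"): 0.5  # jointly conditions on two consecutive words
    })

    def tag_score(features, tag):
        return sum(weights[(f, tag)] for f in features)

    def tag_probs(features, tags):
        scores = {t: math.exp(tag_score(features, t)) for t in tags}
        z = sum(scores.values())
        return {t: s / z for t, s in scores.items()}

    feats = ["prev_tag=DT", "next_tag=VBZ", "word=dog", "word_pair=the+dog"]
    print(tag_probs(feats, ["NN", "VB"]))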
#3763 It is based on: (1) an extended set of <term>features</term>; and (2) <term>inductive decision tree learning</term>.
other,5-3-P03-1022,ak
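Entry #3763 names inductive decision tree learning over an extended feature set. A minimal sketch with scikit-learn follows; the feature columns and toy labels are invented for illustration (an anaphora-style yes/no decision), not the paper's actual feature set.

    from sklearn.tree import DecisionTreeClassifier

    # Sketch: inductive decision tree learning over a small, invented feature set.
    X = [[1, 0, 3],   # each row: [same_gender, same_number, token_distance]
         [1, 1, 1],
         [0, 0, 8],
         [0, 1, 5]]
    y = [0, 1, 0, 0]  # 1 = antecedent, 0 = not

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.predict([[1, 1, 2]]))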
#4004 ... <term>non-NP-antecedents</term>. We present a set of <term>features</term> designed for <term>pronoun resolution in spoken dialogue</term> and determine the most promising features.
other,18-3-P03-1022,ak
#4017 We present a set of features designed for <term>pronoun resolution in spoken dialogue</term> and determine the most promising <term>features</term>.
#7459 Our results show that <term>MT evaluation techniques</term> are able to produce useful features for <term>paraphrase classification</term> and, to a lesser extent, entailment.
other,14-3-J05-1003,ak
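Entry #7459 reports that MT evaluation techniques yield useful features for paraphrase classification. A minimal sketch of one such feature, a BLEU-style clipped n-gram precision between the two sentences (simplified: no brevity penalty or smoothing), with invented example sentences:

    from collections import Counter

    # Simplified BLEU-style n-gram precision between a candidate and a reference,
    # used here as a single paraphrase-classification feature (sketch only).
    def ngram_precision(cand, ref, n=2):
        c = Counter(zip(*[cand[i:] for i in range(n)]))
        r = Counter(zip(*[ref[i:] for i in range(n)]))
        overlap = sum(min(count, r[g]) for g, count in c.items())
        return overlap / max(sum(c.values()), 1)

    s1 = "the cat sat on the mat".split()
    s2 = "a cat was sitting on the mat".split()
    print(ngram_precision(s1, s2, n=1), ngram_precision(s1, s2, n=2))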
#8068 A second model then attempts to improve upon this initial <term>ranking</term>, using additional <term>features</term> of the <term>tree</term> as evidence.
other,19-4-J05-1003,ak
#8094 The strength of our approach is that it allows a tree to be represented as an arbitrary set of <term>features</term>, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account.
other,26-4-J05-1003,ak
#8101 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these <term>features</term> interact or overlap and without the need to define a derivation or a generative model which takes these features into account.
other,45-4-J05-1003,ak
#8120 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a <term>generative model</term> which takes these <term>features</term> into account.
other,23-7-J05-1003,ak
#8187 The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 <term>features</term> over <term>parse trees</term> that were not included in the original model.
other,19-9-J05-1003,ak
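Entries #8094 through #8187 describe reranking in which a tree is represented as an arbitrary set of (mostly indicator) features whose weighted sum is combined with the log-likelihood of a baseline model. A minimal sketch of that scoring scheme follows; the features, weights, and candidate scores are invented for illustration.

    # Sketch: rerank candidate parses by baseline log-likelihood plus a weighted
    # sum of arbitrary indicator features of each tree. Values are illustrative.
    feature_weights = {"rule=S->NP VP": 0.7, "head_pair=saw+man": -0.3}

    def rerank_score(base_loglik, tree_features):
        return base_loglik + sum(feature_weights.get(f, 0.0) for f in tree_features)

    candidates = [
        (-20.1, {"rule=S->NP VP", "head_pair=saw+man"}),
        (-20.6, {"rule=S->NP VP"}),
    ]
    best = max(candidates, key=lambda c: rerank_score(*c))
    print(best)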
#8245 The article also introduces a new algorithm for the boosting approach which takes advantage of the <term>sparsity</term> of the <term>feature space</term> in the <term>parsing data</term>.
tech,21-11-J05-1003,ak
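Entry #8245 mentions exploiting sparsity: since each tree contains only a handful of the possible features, updating one feature's weight changes the scores of only the examples containing that feature. A minimal sketch of that bookkeeping, assuming an inverted feature-to-example index (this is not the article's algorithm, just the sparsity trick it relies on):

    from collections import defaultdict

    # Sketch: with sparse features, index examples by feature so a weight update
    # touches only the examples that actually contain the updated feature.
    examples = [{"f1", "f3"}, {"f2"}, {"f1"}]
    scores = [0.0, 0.0, 0.0]

    index = defaultdict(list)           # feature -> ids of examples containing it
    for i, feats in enumerate(examples):
        for f in feats:
            index[f].append(i)

    def update_weight(feature, delta):
        for i in index[feature]:        # only touches examples with this feature
            scores[i] += delta

    update_weight("f1", 0.5)
    print(scores)  # [0.5, 0.0, 0.5]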
#8291 We argue that the method is an appealing alternative, in terms of both simplicity and efficiency, to work on <term>feature selection methods</term> within log-linear (maximum-entropy) models.
tech,40-5-P05-1010,ak
#8589 In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F1, sentences ≤ 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive <term>manual feature selection</term>.
other,11-6-P05-1053,ak
#9390 Evaluation on the ACE corpus shows that effective incorporation of diverse <term>features</term> enables our system to outperform previously best-reported systems on the 24 ACE relation subtypes and to significantly outperform tree kernel-based systems by over 20 in F-measure on the 5 ACE relation types.
tech,6-2-P05-1057,ak
#9614 All <term>knowledge sources</term> are treated as <term>feature functions</term>, which depend on the source language sentence, the target language sentence and possible additional variables.
other,20-4-P05-1057,ak
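Entry #9614 describes the standard log-linear formulation in which every knowledge source is a feature function of the source sentence and the target sentence, and translation picks the target maximizing the weighted sum. A minimal sketch with invented feature functions and weights (not the paper's actual knowledge sources):

    # Sketch: knowledge sources as feature functions h_m(src, tgt) combined
    # log-linearly; weights and feature functions are invented placeholders.
    def h_length_ratio(src, tgt):
        return -abs(len(tgt.split()) - len(src.split()))

    def h_word_overlap(src, tgt):
        return len(set(src.split()) & set(tgt.split()))

    features = [(0.4, h_length_ratio), (0.6, h_word_overlap)]

    def score(src, tgt):
        return sum(lam * h(src, tgt) for lam, h in features)

    src = "das ist gut"
    cands = ["that is good", "this is a very good thing"]
    print(max(cands, key=lambda e: score(src, e)))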
#9669 In this paper, we use IBM Model 3 alignment properties, POS correspondence, and <term>bilingual dictionary coverage</term> as <term>features</term>.
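Entry #9669 lists bilingual dictionary coverage among the features. A minimal sketch of such a coverage feature follows, using an invented toy dictionary and sentence pair:

    # Sketch: bilingual dictionary coverage as a feature, i.e. the fraction of
    # source words with at least one dictionary translation appearing in the
    # target sentence. Dictionary and sentences are invented toy data.
    bilingual_dict = {"das": {"the", "that"}, "ist": {"is"}, "gut": {"good"}}

    def dict_coverage(src_tokens, tgt_tokens):
        tgt = set(tgt_tokens)
        covered = sum(1 for w in src_tokens if bilingual_dict.get(w, set()) & tgt)
        return covered / max(len(src_tokens), 1)

    print(dict_coverage("das ist gut".split(), "that is good".split()))  # 1.0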