|
. In this presentation , we describe the
|
features
|
of and
<term>
requirements
</term>
for a genuinely
|
#260
In this presentation, we describe the features of and requirements for a genuinely useful software infrastructure for this purpose. |
|
called a
<term>
semantic frame
</term>
. The key
|
features
|
of the
<term>
system
</term>
include : ( i
|
#441
The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers, relatively free word order, and frequent omissions of arguments). |
other,7-2-P01-1070,bq |
which are built from
<term>
shallow linguistic
|
features
|
</term>
of
<term>
questions
</term>
, are employed
|
#2152
These models, which are built from shallow linguistic features of questions, are employed to predict target variables which represent a user's informational goals. |
other,36-1-N03-1033,bq |
</term>
, ( ii ) broad use of
<term>
lexical
|
features
|
</term>
, including
<term>
jointly conditioning
|
#2946
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,66-1-N03-1033,bq |
fine-grained modeling of
<term>
unknown word
|
features
|
</term>
. Using these ideas together , the
|
#2977
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,12-3-P03-1002,bq |
based on : ( 1 ) an extended set of
<term>
|
features
|
</term>
; and ( 2 )
<term>
inductive decision
|
#3762
It is based on: (1) an extended set of features; and (2) inductive decision tree learning. |
other,5-3-P03-1022,bq |
non-NP-antecedents
</term>
. We present a set of
<term>
|
features
|
</term>
designed for
<term>
pronoun resolution
|
#4003
We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. |
other,18-3-P03-1022,bq |
</term>
and determine the most promising
<term>
|
features
|
</term>
. We evaluate the
<term>
system
</term>
|
#4016
We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. |
other,13-3-C04-1035,bq |
create a set of
<term>
domain independent
|
features
|
</term>
to annotate an input
<term>
dataset
|
#5197
We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. |
other,5-5-C04-1035,bq |
approx 90 % . The results show that the
<term>
|
features
|
</term>
in terms of which we formulate our
|
#5245
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. |
other,35-5-C04-1035,bq |
be learnt automatically from these
<term>
|
features
|
</term>
. We suggest a new goal and evaluation
|
#5275
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. |
other,6-3-C04-1068,bq |
</term>
. In this paper , we identify
<term>
|
features
|
</term>
of
<term>
electronic discussions
</term>
|
#5431
In this paper, we identify features of electronic discussions that influence the clustering process, and offer a filtering mechanism that removes undesirable influences. |
other,11-4-C04-1116,bq |
most of the words with similar
<term>
context
|
features
|
</term>
in each author 's
<term>
corpus
</term>
|
#6170
According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions. |
other,4-3-C04-1128,bq |
summarization
</term>
. We show that various
<term>
|
features
|
</term>
based on the structure of
<term>
email-threads
|
#6286
We show that various features based on the structure of email-threads can be used to improve upon lexical similarity of discourse segments for question-answer pairing. |
other,3-3-N04-1024,bq |
essays
</term>
. This system identifies
<term>
|
features
|
</term>
of
<term>
sentences
</term>
based on
<term>
|
#6693
This system identifies features of sentences based on semantic similarity measures and discourse structure. |
other,6-4-N04-1024,bq |
support vector machine
</term>
uses these
<term>
|
features
|
</term>
to capture
<term>
breakdowns in coherence
|
#6711
A support vector machine uses these features to capture breakdowns in coherence due to relatedness to the essay question and relatedness between discourse elements. |
other,38-4-N04-4028,bq |
to capture arbitrary , overlapping
<term>
|
features
|
</term>
of the input in a
<term>
Markov model
|
#6849
The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model. |
other,12-4-I05-5003,bq |
techniques
</term>
are able to produce useful
<term>
|
features
|
</term>
for
<term>
paraphrase classification
|
#8409
Our results show that MT evaluation techniques are able to produce useful features for paraphrase classification and to a lesser extent entailment. |
other,14-3-J05-1003,bq |
<term>
ranking
</term>
, using additional
<term>
|
features
|
</term>
of the
<term>
tree
</term>
as evidence
|
#8703
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
other,19-4-J05-1003,bq |
represented as an arbitrary set of
<term>
|
features
|
</term>
, without concerns about how these
|
#8729
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account.