|
. In this presentation , we describe the
|
features
|
of and
<term>
requirements
</term>
for a genuinely
|
#260
In this presentation, we describe the features of and requirements for a genuinely useful software infrastructure for this purpose. |
|
called a
<term>
semantic frame
</term>
. The key
|
features
|
of the
<term>
system
</term>
include : ( i
|
#441
The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers, relatively free word order, and frequent omissions of arguments). |
other,7-2-P01-1070,bq |
which are built from
<term>
shallow linguistic
|
features
|
</term>
of
<term>
questions
</term>
, are employed
|
#2152
These models, which are built from shallow linguistic features of questions, are employed to predict target variables which represent a user's informational goals. |
other,36-1-N03-1033,bq |
</term>
, ( ii ) broad use of
<term>
lexical
|
features
|
</term>
, including
<term>
jointly conditioning
|
#2946
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,12-3-P03-1002,bq |
based on : ( 1 ) an extended set of
<term>
|
features
|
</term>
; and ( 2 )
<term>
inductive decision
|
#3762
It is based on: (1) an extended set of features; and (2) inductive decision tree learning. |
other,5-3-P03-1022,bq |
non-NP-antecedents
</term>
. We present a set of
<term>
|
features
|
</term>
designed for
<term>
pronoun resolution
|
#4003
We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. |
other,13-3-C04-1035,bq |
create a set of
<term>
domain independent
|
features
|
</term>
to annotate an input
<term>
dataset
|
#5197
We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. |
other,6-3-C04-1068,bq |
</term>
. In this paper , we identify
<term>
|
features
|
</term>
of
<term>
electronic discussions
</term>
|
#5431
In this paper, we identify features of electronic discussions that influence the clustering process, and offer a filtering mechanism that removes undesirable influences. |
other,11-4-C04-1116,bq |
most of the words with similar
<term>
context
|
features
|
</term>
in each author 's
<term>
corpus
</term>
|
#6170
According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions. |
other,4-3-C04-1128,bq |
summarization
</term>
. We show that various
<term>
|
features
|
</term>
based on the structure of
<term>
email-threads
|
#6286
We show that various features based on the structure of email-threads can be used to improve upon lexical similarity of discourse segments for question-answer pairing. |
other,3-3-N04-1024,bq |
essays
</term>
. This system identifies
<term>
|
features
|
</term>
of
<term>
sentences
</term>
based on
<term>
|
#6693
This system identifies features of sentences based on semantic similarity measures and discourse structure. |
other,38-4-N04-4028,bq |
to capture arbitrary , overlapping
<term>
|
features
|
</term>
of the input in a
<term>
Markov model
|
#6849
The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model. |
other,12-4-I05-5003,bq |
techniques
</term>
are able to produce useful
<term>
|
features
|
</term>
for
<term>
paraphrase classification
|
#8409
Our results show that MT evaluation techniques are able to produce useful features for paraphrase classification and to a lesser extent entailment. |
other,14-3-J05-1003,bq |
<term>
ranking
</term>
, using additional
<term>
|
features
|
</term>
of the
<term>
tree
</term>
as evidence
|
#8703
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
other,15-3-P05-1069,bq |
model
</term>
which uses
<term>
real-valued
|
features
|
</term>
( e.g. a
<term>
language model score
|
#9601
We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. |
other,11-5-E06-1018,bq |
<term>
sentence co-occurrences
</term>
as
<term>
|
features
|
</term>
allows for accurate results . Additionally
|
#10177
The combination with a two-step clustering process using sentence co-occurrences as features allows for accurate results. |
other,21-2-E06-1022,bq |
utterance
</term>
and
<term>
conversational context
|
features
|
</term>
. Then , we explore whether information
|
#10276
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
other,5-5-E06-1035,bq |
</term>
. Examination of the effect of
<term>
|
features
|
</term>
shows that
<term>
predicting top-level
|
#10531
Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. |
other,18-1-P06-2012,bq |
use of various
<term>
lexical and syntactic
|
features
|
</term>
from the
<term>
contexts
</term>
. It
|
#11337
This paper presents an unsupervised learning approach to disambiguate various relations between named entities by use of various lexical and syntactic features from the contexts. |
|
conversation transcripts
</term>
etc. , have
|
features
|
that differ significantly from
<term>
neat
|
#12995
However, a great deal of natural language texts, e.g., memos, rough drafts, conversation transcripts etc., have features that differ significantly from neat texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic construction, missing periods, etc. |