. In this presentation, we describe the | features | of and <term>requirements</term> for a genuinely
#260 In this presentation, we describe the features of and requirements for a genuinely useful software infrastructure for this purpose.
called a <term>semantic frame</term>. The key | features | of the <term>system</term> include: (i
#441 The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers, relatively free word order, and frequent omissions of arguments).
identities themselves, e.g. block bigram | features | . Our <term>training algorithm</term> can
#9624 We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features.
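
The sentence indexed as #9624 (repeated at #9601 and #9613 below, from P05-1069) describes maximum-likelihood training of a log-linear model over mixed real-valued and binary features. The following is a minimal illustrative sketch of that setup, not the paper's implementation; the feature layout, toy data, and plain gradient-ascent loop are assumptions made for the example.

import numpy as np

# Minimal log-linear model sketch: each candidate "block bigram" is scored
# by a weight vector over mixed features -- a real-valued one (e.g. a
# language-model score) and binary identity features. Illustrative only.

def featurize(candidate):
    """Map a candidate to a feature vector: [lm_score, id_0, id_1, id_2]."""
    f = np.zeros(4)
    f[0] = candidate["lm_score"]          # real-valued feature
    f[1 + candidate["bigram_id"]] = 1.0   # binary block-bigram identity feature
    return f

def train_mle(examples, epochs=200, lr=0.1):
    """Maximum-likelihood training: maximize log P(correct | candidates),
    where P is a softmax over log-linear scores of each candidate set."""
    w = np.zeros(4)
    for _ in range(epochs):
        for cands, gold in examples:
            F = np.array([featurize(c) for c in cands])
            s = F @ w
            p = np.exp(s - s.max())
            p /= p.sum()                    # softmax over candidates
            w += lr * (F[gold] - p @ F)     # gradient of the log-likelihood
    return w

# Toy data: two candidate sets, each with the index of the correct candidate.
examples = [
    ([{"lm_score": -1.2, "bigram_id": 0}, {"lm_score": -2.5, "bigram_id": 1}], 0),
    ([{"lm_score": -3.0, "bigram_id": 2}, {"lm_score": -1.1, "bigram_id": 0}], 1),
]
print(train_mle(examples))
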
conversation transcripts</term> etc., have | features | that differ significantly from <term>neat
#12995 However, a great deal of natural language texts e.g., memos, rough drafts, conversation transcripts etc., have features that differ significantly from neat texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic construction, missing periods, etc.
</term> to fit. One of the distinguishing | features | of a more <term>linguistically sophisticated
#20006 One of the distinguishing features of a more linguistically sophisticated representation of documents over a word set based representation of them is that linguistically sophisticated units are more frequently individually good predictors of document descriptors (keywords) than single words are.
other,11-4-C04-1116,bq
most of the words with similar <term>context | features | </term> in each author's <term>corpus</term>
#6170 According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions.
other,11-5-E06-1018,bq
<term>sentence co-occurrences</term> as <term> | features | </term> allows for accurate results. Additionally
#10177 The combination with a two-step clustering process using sentence co-occurrences as features allows for accurate results.
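
Record #10177 (E06-1018) refers to clustering with sentence co-occurrences as features. As a rough sketch of that idea only: represent each word by the sentences it occurs in and cluster those vectors. The toy corpus and the single k-means step below are assumptions; the indexed paper uses a two-step clustering process.

import numpy as np
from sklearn.cluster import KMeans

sentences = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat"],
    ["a", "cat", "ran"],
    ["a", "dog", "ran"],
]
vocab = sorted({w for s in sentences for w in s})

# Word-by-sentence co-occurrence matrix: X[i, j] = 1 iff word i occurs in sentence j.
X = np.array([[1.0 if w in s else 0.0 for s in sentences] for w in vocab])

labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(dict(zip(vocab, labels)))
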
other,12-3-P03-1002,bq
based on: (1) an extended set of <term> | features | </term>; and (2) <term>inductive decision
#3762 It is based on: (1) an extended set of features; and (2) inductive decision tree learning.
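
Record #3762 (P03-1002) pairs an extended feature set with inductive decision tree learning. A minimal stand-in using scikit-learn; the toolkit choice, feature vectors, and labels are assumptions for illustration, not the paper's setup.

from sklearn.tree import DecisionTreeClassifier

# Each row is one instance's feature vector; y holds binary class labels.
X = [
    [1, 0, 3],
    [0, 1, 1],
    [1, 1, 2],
    [0, 0, 0],
]
y = [1, 0, 1, 0]

# Induce a decision tree from the labeled feature vectors, then classify.
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[1, 0, 2]]))
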
other,12-4-I05-5003,bq
techniques</term> are able to produce useful <term> | features | </term> for <term>paraphrase classification
#8409 Our results show that MT evaluation techniques are able to produce useful features for paraphrase classification and to a lesser extent entailment.
other,13-3-C04-1035,bq
create a set of <term>domain independent | features | </term> to annotate an input <term>dataset
#5197 We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system.
other,14-3-J05-1003,bq
<term>ranking</term>, using additional <term> | features | </term> of the <term>tree</term> as evidence
#8703 A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence.
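
Record #8703 (J05-1003; see also #8729, #8736, and #8822 below) describes a second-stage model that rescores an initial ranking by combining a baseline model's log-probability with additional indicator features of each tree. A schematic sketch of such reranking, with hypothetical feature names, weights, and scores:

# Rerank candidate parses: score = alpha * baseline log-probability plus a
# weighted sum of indicator features of the tree. Illustrative only.

def rerank(candidates, weights, alpha=1.0):
    """candidates: list of (baseline_logprob, feature_set) pairs.
    Returns the index of the highest-scoring candidate."""
    def score(cand):
        logprob, feats = cand
        return alpha * logprob + sum(weights.get(f, 0.0) for f in feats)
    return max(range(len(candidates)), key=lambda i: score(candidates[i]))

# Toy usage: the baseline prefers candidate 0, but feature evidence
# (a tree-fragment indicator) flips the decision to candidate 1.
weights = {"rule:NP->NP_PP": 0.8, "rule:VP->V_NP_PP": -0.2}
cands = [(-10.0, {"rule:VP->V_NP_PP"}), (-10.5, {"rule:NP->NP_PP"})]
print(rerank(cands, weights))  # -> 1
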
other,15-3-P05-1069,bq
model</term> which uses <term>real-valued | features | </term> (e.g. a <term>language model score
#9601 We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features.
other,18-1-P06-2012,bq
use of various <term>lexical and syntactic | features | </term> from the <term>contexts</term>. It
#11337 This paper presents an unsupervised learning approach to disambiguate various relations between name entities by use of various lexical and syntactic features from the contexts.
other,18-3-P03-1022,bq
</term> and determine the most promising <term> | features | </term>. We evaluate the <term>system</term>
#4016 We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features.
other,19-4-J05-1003,bq
represented as an arbitrary set of <term> | features | </term>, without concerns about how these
#8729 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account.
other,19-6-E06-1035,bq
<term>lexical-cohesion and conversational | features | </term>, but do not change the general preference
#10630 We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks.
other,21-2-E06-1022,bq
utterance</term> and <term>conversational context | features | </term>. Then, we explore whether information
#10276 First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features.
other,23-7-J05-1003,bq
evidence from an additional 500,000 <term> | features | </term> over <term>parse trees</term> that
#8822 The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model.
other,26-4-J05-1003,bq
, without concerns about how these <term> | features | </term> interact or overlap and without the
#8736 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account.
other,27-3-P05-1069,bq
model score</term>) as well as <term>binary | features | </term> based on the <term>block</term> identities
#9613 We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features.