other,26-4-J05-1003,bq |
, without concerns about how these <term> | features | </term> interact or overlap and without the |
#8736
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,45-4-J05-1003,bq |
generative model </term> which takes these <term> | features | </term> into account . We introduce a new |
#8755
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,23-7-J05-1003,bq |
evidence from an additional 500,000 <term> | features | </term> over <term> parse trees </term> that |
#8822
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
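Sentence #8822 describes a linear reranking scheme: a baseline log-likelihood combined with a large set of sparse features over candidate parse trees. The following is a minimal Python sketch of that idea, assuming each candidate has already been reduced to its baseline score and an arbitrary feature set; the feature names and weights are hypothetical, not taken from Collins [1999] or the paper.

```python
# Minimal reranking sketch (illustrative only): each candidate parse is a
# pair (baseline log-likelihood, set of binary features over the tree).
def rerank(candidates, weights, w_baseline=1.0):
    """Pick the candidate maximizing w0 * log P_baseline(tree)
    plus the summed weights of its features."""
    def score(cand):
        log_p, feats = cand
        return w_baseline * log_p + sum(weights.get(f, 0.0) for f in feats)
    return max(candidates, key=score)

# Usage: two candidate parses of one sentence, each an arbitrary feature set.
candidates = [
    (-12.3, {"rule:S->NP_VP", "bigram:NP_VP"}),
    (-12.9, {"rule:S->NP_VP", "head:saw->dog"}),
]
weights = {"head:saw->dog": 0.8}   # hypothetical discriminatively learned weight
best = rerank(candidates, weights)
```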
other,15-3-P05-1069,bq |
model </term> which uses <term> real-valued | features | </term> ( e.g. a <term> language model score |
#9601
We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. |
other,27-3-P05-1069,bq |
model score </term> ) as well as <term> binary | features | </term> based on the <term> block </term> identities |
#9613
We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. |
|
identities themselves , e.g. block bigram | features | . Our <term> training algorithm </term> can |
#9624
We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. |
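The P05-1069 records above all quote one sentence: a log-linear model over block bigrams whose features mix real values (a language model score) with binary identity indicators. Below is a hedged Python sketch of such a model; the feature templates, weights, and candidate set are invented for illustration and are not the paper's actual feature set.

```python
import math

def features(b_prev, b, lm_score):
    """Mix of binary identity features and one real-valued feature."""
    return {
        f"bigram:{b_prev}|{b}": 1.0,   # binary block bigram identity
        f"unigram:{b}": 1.0,           # binary block identity
        "lm_score": lm_score,          # real-valued language model score
    }

def prob(b, b_prev, candidates, weights):
    """p(b | b_prev) under the log-linear model, normalized over candidates."""
    def raw(block, lm):
        return math.exp(sum(weights.get(k, 0.0) * v
                            for k, v in features(b_prev, block, lm).items()))
    scores = {block: raw(block, lm) for block, lm in candidates.items()}
    return scores[b] / sum(scores.values())

# Toy usage: candidate next blocks mapped to their (toy) LM scores.
weights = {"lm_score": 0.5, "bigram:the house|is small": 1.2}
cands = {"is small": -2.1, "small is": -4.0}
p = prob("is small", "the house", cands, weights)
```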
other,8-4-P05-1069,bq |
</term> can easily handle millions of <term> | features | </term> . The best system obtains a 18.6 |
#9634
Our training algorithm can easily handle millions of features. |
other,11-5-E06-1018,bq |
<term> sentence co-occurrences </term> as <term> | features | </term> allows for accurate results . Additionally |
#10177
The combination with a two-step clustering process using sentence co-occurrences as features allows for accurate results. |
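To make "sentence co-occurrences as features" concrete, here is a small Python sketch that represents each word by the sentences it occurs in, the kind of vector a clustering step could consume; the paper's actual two-step clustering process is not reproduced, and the data is toy.

```python
from collections import Counter

def cooccurrence_features(sentences):
    """Map each word to a Counter over the sentence ids it appears in,
    so words sharing many sentences get similar feature vectors."""
    feats = {}
    for sid, sent in enumerate(sentences):
        for word in sent.split():
            feats.setdefault(word, Counter())[sid] += 1
    return feats

sentences = ["the engine failed", "the engine stalled", "markets rallied today"]
feats = cooccurrence_features(sentences)
# "engine" co-occurs with sentences 0 and 1; "markets" only with sentence 2.
```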
other,21-2-E06-1022,bq |
utterance </term> and <term> conversational context | features | </term> . Then , we explore whether information |
#10276
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
other,9-4-E06-1022,bq |
conversational context </term> and <term> utterance | features | </term> are combined with <term> speaker 's |
#10303
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
other,5-5-E06-1035,bq |
</term> . Examination of the effect of <term> | features | </term> shows that <term> predicting top-level |
#10531
Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. |
other,51-5-E06-1035,bq |
<term> lexical-cohesion and conversational | features | </term> performs best , and ( 3 ) <term> conversational |
#10580
Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. |
other,19-6-E06-1035,bq |
<term> lexical-cohesion and conversational | features | </term> , but do not change the general preference |
#10630
We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks. |
other,18-1-P06-2012,bq |
use of various <term> lexical and syntactic | features | </term> from the <term> contexts </term> . It |
#11337
This paper presents an unsupervised learning approach to disambiguate various relations between name entities by use of various lexical and syntactic features from the contexts. |
|
conversation transcripts </term> etc. , have | features | that differ significantly from <term> neat |
#12995
However, a great deal of natural language texts e.g., memos, rough drafts, conversation transcripts etc., have features that differ significantly from neat texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic construction, missing periods, etc. |
other,8-1-P86-1038,bq |
</term> use structures containing sets of <term> | features | </term> to describe <term> linguistic objects |
#14631
Unification-based grammar formalisms use structures containing sets of features to describe linguistic objects. |
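Sentence #14631 names the core operation of unification-based formalisms. As a minimal sketch, assuming feature structures are plain nested dicts with atomic values (real formalisms such as PATR-II also handle reentrancy, which is omitted here), unification in Python looks like:

```python
def unify(fs1, fs2):
    """Return the most general structure consistent with both feature
    structures, or None if they assign conflicting values to a feature."""
    result = dict(fs1)
    for feat, val2 in fs2.items():
        if feat not in result:
            result[feat] = val2
        elif isinstance(result[feat], dict) and isinstance(val2, dict):
            sub = unify(result[feat], val2)
            if sub is None:
                return None          # clash inside a substructure
            result[feat] = sub
        elif result[feat] != val2:
            return None              # atomic value clash
    return result

np = {"cat": "NP", "agr": {"num": "sg"}}
vp = {"agr": {"num": "sg", "per": 3}}
print(unify(np, vp))                      # {'cat': 'NP', 'agr': {'num': 'sg', 'per': 3}}
print(unify(np, {"agr": {"num": "pl"}}))  # None (number clash)
```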
|
</term> to fit . One of the distinguishing | features | of a more <term> linguistically sophisticated |
#20006
One of the distinguishing features of a more linguistically sophisticated representation of documents over a word set based representation of them is that linguistically sophisticated units are more frequently individually good predictors of document descriptors (keywords) than single words are. |