other,19-6-E06-1035,bq | <term> lexical-cohesion and conversational | features | </term> , but do not change the general preference |
#10630
We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks. |
other,36-1-N03-1033,bq | </term> , ( ii ) broad use of <term> lexical | features | </term> , including <term> jointly conditioning |
#2946
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,19-4-J05-1003,bq | represented as an arbitrary set of <term> | features | </term> , without concerns about how these |
#8729
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,12-3-P03-1002,bq | based on : ( 1 ) an extended set of <term> | features | </term> ; and ( 2 ) <term> inductive decision |
#3762
It is based on: (1) an extended set of features; and (2) inductive decision tree learning. |
| identities themselves , e.g. block bigram | features | . Our <term> training algorithm </term> can |
#9624
We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. |
other,8-4-P05-1069,bq | </term> can easily handle millions of <term> | features | </term> . The best system obtains a 18.6 |
#9634
Our training algorithm can easily handle millions of features. |
other,21-2-E06-1022,bq | utterance </term> and <term> conversational context | features | </term> . Then , we explore whether information |
#10276
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
other,66-1-N03-1033,bq | fine-grained modeling of <term> unknown word | features | </term> . Using these ideas together , the |
#2977
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,18-3-P03-1022,bq | </term> and determine the most promising <term> | features | </term> . We evaluate the <term> system </term> |
#4016
We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. |
other,35-5-C04-1035,bq | be learnt automatically from these <term> | features | </term> . We suggest a new goal and evaluation |
#5275
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. |
other,15-3-P05-1069,bq | model </term> which uses <term> real-valued | features | </term> ( e.g. a <term> language model score |
#9601
We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. |
other,11-5-E06-1018,bq | <term> sentence co-occurrences </term> as <term> | features | </term> allows for accurate results . Additionally |
#10177
The combination with a two-step clustering process using sentence co-occurrences as features allows for accurate results. |
other,9-4-E06-1022,bq | conversational context </term> and <term> utterance | features | </term> are combined with <term> speaker 's |
#10303
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
other,4-3-C04-1128,bq | summarization </term> . We show that various <term> | features | </term> based on the structure of <term> email-threads |
#6286
We show that various features based on the structure of email-threads can be used to improve upon lexical similarity of discourse segments for question-answer pairing. |
other,27-3-P05-1069,bq | model score </term> ) as well as <term> binary | features | </term> based on the <term> block </term> identities |
#9613
We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. |
other,5-3-P03-1022,bq | non-NP-antecedents </term> . We present a set of <term> | features | </term> designed for <term> pronoun resolution |
#4003
We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. |
other,12-4-I05-5003,bq | techniques </term> are able to produce useful <term> | features | </term> for <term> paraphrase classification |
#8409
Our results show that MT evaluation techniques are able to produce useful features for paraphrase classification and to a lesser extent entailment. |
other,18-1-P06-2012,bq | use of various <term> lexical and syntactic | features | </term> from the <term> contexts </term> . It |
#11337
This paper presents an unsupervised learning approach to disambiguate various relations between named entities by use of various lexical and syntactic features from the contexts. |
other,11-4-C04-1116,bq | most of the words with similar <term> context | features | </term> in each author 's <term> corpus </term> |
#6170
According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions. |
other,5-5-C04-1035,bq | approx 90 % . The results show that the <term> | features | </term> in terms of which we formulate our |
#5245
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. |