|
approach
</term>
alone can achieve competitive
|
results
|
, ( 2 ) for
<term>
predicting top-level boundaries
|
#10561
Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. |
|
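The entry above contrasts a lexical cohesion-based approach with feature-combining classifiers for boundary prediction. As an illustrative sketch only (not the paper's actual system), a minimal lexical cohesion score between adjacent windows can be computed as cosine similarity of word counts, where a low score suggests a subtopic boundary; the function name and windowing are assumptions:

```python
# Sketch of a lexical cohesion score for boundary prediction (TextTiling-style).
# Window size, tokenization, and the boundary threshold are all assumptions,
# not details taken from the cited work.
from collections import Counter
import math

def cohesion(left_tokens, right_tokens):
    """Cosine similarity between word-count vectors of adjacent windows.
    Low cohesion between neighboring windows hints at a (sub)topic boundary."""
    a, b = Counter(left_tokens), Counter(right_tokens)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

A segmenter would slide this over the transcript and place boundaries at local cohesion minima; the entry's point is that this alone is competitive for subtopic boundaries, while top-level boundaries benefit from added conversational features.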
its impact on
<term>
word similarity
</term>
|
results
|
, proposing an objective
<term>
measure
</term>
|
#5344
Motivated by this semantic criterion we analyze the empirical quality of distributional word feature vectors and its impact on word similarity results, proposing an objective measure for evaluating feature vector quality. |
|
negative feedback
</term>
. Based on these
|
results
|
, we present an
<term>
ECA
</term>
that uses
|
#5090
Based on these results, we present an ECA that uses verbal and nonverbal grounding acts to update dialogue state. |
|
<term>
word-based models
</term>
. Our empirical
|
results
|
, which hold for all examined
<term>
language
|
#2590
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. |
|
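The entry above names two simple techniques: heuristic phrase extraction from word-based alignments and lexical weighting of phrase translations. As a hedged sketch of the second idea (the function name, NULL handling, and data layout are illustrative assumptions, not the cited system), each source word's probability mass is averaged over the target words it is aligned to:

```python
# Sketch of lexical weighting for a phrase pair, given word-alignment links.
# `align` is a set of (src_index, tgt_index) links; `w` maps
# (src_word, tgt_word) -> lexical translation probability.
# The NULL-alignment convention (tgt word None) is an assumption.
def lex_weight(src_phrase, tgt_phrase, align, w):
    score = 1.0
    for i, f in enumerate(src_phrase):
        links = [j for (si, j) in align if si == i]
        if links:
            # Average the translation probabilities over aligned target words.
            score *= sum(w.get((f, tgt_phrase[j]), 0.0) for j in links) / len(links)
        else:
            # Unaligned source words fall back to a NULL translation probability.
            score *= w.get((f, None), 0.0)
    return score
```

For example, with `w = {("das", "the"): 0.8, ("Haus", "house"): 0.9}` and the identity alignment `{(0, 0), (1, 1)}`, the phrase pair ("das Haus", "the house") scores 0.8 × 0.9 = 0.72.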
<term>
features
</term>
allows for accurate
|
results
|
. Additionally , a novel and likewise automatic
|
#10181
The combination with a two-step clustering process using sentence co-occurrences as features allows for accurate results. |
|
<term>
generation algorithm
</term>
based on the
|
results
|
. The evaluation using another 23 subjects
|
#5709
We conducted psychological experiments with 42 subjects to collect referring expressions in such situations, and built a generation algorithm based on the results. |
|
language tests
</term>
, achieving encouraging
|
results
|
. The paper presents a method for
<term>
|
#6434
We apply it in combination with a terabyte corpus to answer natural language tests, achieving encouraging results. |
|
described herein showed very encouraging
|
results
|
. The same system used in a
<term>
validation
|
#6514
The evaluation of the WSD system implementing the method described herein showed very encouraging results. |
|
</term>
enable high quality
<term>
IE
</term>
|
results
|
. This paper proposes the
<term>
Hierarchical
|
#3787
The experimental results prove our claim that accurate predicate-argument structures enable high quality IE results. |
|
<term>
WSD datasets
</term>
, with promising
|
results
|
. This paper presents a novel
<term>
ensemble
|
#7019
We evaluated the topic signatures on a WSD task, where we trained a second-order vector co-occurrence algorithm on standard WSD datasets, with promising results. |
|
size of the
<term>
databases
</term>
used some
|
results
|
about the effectiveness of these
<term>
indices
|
#195
Despite the small size of the databases used, some results about the effectiveness of these indices can be obtained. |
|
candidates
</term>
for
<term>
understanding
</term>
|
results
|
and resolving the
<term>
ambiguity
</term>
|
#4198
By holding multiple candidates for understanding results and resolving the ambiguity as the dialogue progresses, the discourse understanding accuracy can be improved. |
|
<term>
Implementation
</term>
and
<term>
empirical
|
results
|
</term>
are described for the analysis
|
#16339
Implementation and empirical results are described for the analysis of dependency structures of Japanese patent claim sentences. |
|
from
<term>
documents
</term>
. Experimental
|
results
|
are encouraging . This paper describes
<term>
|
#11624
Experimental results are encouraging. |
|
beam-search decoder
</term>
. Experimental
|
results
|
are presented , that demonstrate how the
|
#7415
Experimental results are presented that demonstrate how the proposed method allows better generalization from the training data. |
|
<term>
corpora
</term>
and compare them with the
|
results
|
by the
<term>
NTHU 's statistic-based system
|
#18352
We will show the experimental results for two corpora and compare them with the results by the NTHU's statistic-based system, the only system that we know has attacked the same problem. |
|
</term>
. Finally , we have shown that these
|
results
|
can be improved using a bigger and a more
|
#11289
Finally, we have shown that these results can be improved using a bigger and a more homogeneous corpus to train, that is, a bigger corpus written by a single author. |
|
<term>
quality
</term>
? We present empirical
|
results
|
casting doubt on this common , but unproved
|
#9335
We present empirical results casting doubt on this common, but unproved, assumption. |
|
synchronous ' way . Two
<term>
hardness
</term>
|
results
|
for the class
<term>
NP
</term>
are reported
|
#7478
Two hardness results for the class NP are reported, along with an exponential time lower-bound for certain classes of algorithms that are currently used in the literature. |
|
title
</term>
. We will show the experimental
|
results
|
for two
<term>
corpora
</term>
and compare
|
#18343
We will show the experimental results for two corpora and compare them with the results by the NTHU's statistic-based system, the only system that we know has attacked the same problem. |