C02-1007 not yet conducted a systematic quantitative evaluation. Conducting a systematic evaluation
C04-1150 follows, we firstly provide a quantitative evaluation of the OntoLearn ontology learning
D11-1105 relevant sentences. We conducted quantitative evaluation using standard test data sets
D11-1095 different. Finally, we supplemented quantitative evaluation with qualitative evaluation of
C04-1150 algorithms. We believe that a quantitative evaluation is particularly important in
D11-1061 redundant tweets. An extensive quantitative evaluation shows that our system successfully
D12-1065 terms. 6.2 News articles For quantitative evaluation, we use newswire data. Our
D10-1007 do any modeling with TAM in our quantitative evaluation.) Lerman and McDonald introduce
D10-1006 only used these three aspects for quantitative evaluation. To avoid ambiguity, we
D10-1111 we gave various qualitative and quantitative evaluations of our model over the datasets
C04-1138 useful. We have not produced a quantitative evaluation of the miss rate of the clustering
D11-1105 method with four baseline methods. Quantitative evaluation based on Rouge metric demonstrates
A00-1011 generation module. We report quantitative evaluation results, analyze the results
C02-1007 relationship from corpora and perform a quantitative evaluation of our results. 2 Paradigmatic
C04-1150 OntoLearn This section provides a quantitative evaluation of OntoLearn's main algorithms
C96-1071 foremost, in order to generate quantitative evaluation results we have used the MUC-6
C04-1105 computationally feasible. 4.3 Quantitative evaluation using pseudo target words A method
A92-1022 fitted parses, and present a quantitative evaluation of the improvements obtained
D10-1011 performance. As far as we know, no quantitative evaluation of the increase or decrease of
D10-1115 claim specifically, but our quantitative evaluation in Section 6 shows that our approach