D13-1194 each dataset separately using 5-fold cross-validation. We report precision (P),
D10-1116 threshold values for each PoS, 5-fold cross-validation was implemented to conduct the
D13-1180 models. We also conduct 5 times 5-fold cross-validation like the first experiment. In
D12-1115 classes. The first set consists of 5-fold cross-validation experiments on our training data
D13-1180 prompt-specific setting. We conduct 5-fold cross-validation, where the essays of each prompt
D13-1096 responses #posts We perform a 5-fold cross-validation on the 422 labeled posts, with
D13-1180 like the first experiment. In 5-fold cross-validation, essays associated with the
D13-1002 the same data set, again with 5-fold cross-validation. The results in Table 3 show
D12-1047 into five subsets and conduct 5-fold cross-validation experiments. In each trial
D14-1145 help. We report results for a 5-fold cross-validation. 5 Results Table 2 illustrates
D13-1189 Both models are evaluated using 5-fold cross-validation. More specifically, the single-domain
C02-1070 from the French annotations under 5-fold cross-validation. It is remarkable that the most
D14-1068 and RF models were tuned using 5-fold cross-validation results, with models selected
D14-1201 On each data set, we perform 5-fold cross-validation test and take the average as
D11-1147 documents. For each query we use 5-fold cross-validation, and predict the relevance of
D14-1214 borrowed from NELL12. We use 5-fold cross-validation and report results in Table 7
D13-1154 feature sets. Results are from 5-fold cross-validation. The four feature sets are described
D09-1143 . For experimentation, we use 5-fold cross-validation with the Tree Kernel Tools (
D11-1132 entropy classifier rather than using a 5-fold cross-validation. The scores reported SVM achieved
D14-1052 selected bags). Then, we perform 5-fold cross-validation for every aspect on each entire
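
All of the snippets above follow the same protocol: split the data into five folds, train on four, evaluate on the held-out fold, and report the score averaged over the five runs. The following is a minimal sketch of that protocol; the synthetic data, the logistic-regression classifier, and the precision metric are illustrative assumptions, not the setup of any of the cited papers.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score
from sklearn.model_selection import KFold

# Illustrative stand-in data (not from any of the papers above).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf.split(X):
    # Train on four folds, evaluate on the held-out fold.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(precision_score(y[test_idx], pred))

# Report the average over the five folds, as most of the papers above do.
print("mean precision over 5 folds: %.3f" % np.mean(scores))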