P06-1002 describes work related to our alignment evaluation approach . Following this we
N09-1042 1 presents the results of the alignment evaluation . In every case , the best performance
P10-1032 Translation In addition to the intrinsic alignment evaluation , we further conduct the extrinsic
D15-1210 heuristic for symmetrization . For the alignment evaluation , we find that our approach achieves
P10-1032 parsed corpus for the sub-tree alignment evaluation . Experimental results show that
D10-1058 error rate ( AER ) as the word alignment evaluation criterion . Let A be the alignments
D10-1058 is the same as the above word alignment evaluation bitexts , with alignments for
D14-1017 > th ) freq ( w ) 4.2 Word alignment evaluation We measure the impact of our
D15-1144 Tsinghua Chinese-English word alignment evaluation data set ( Liu and Sun , 2015
P12-2055 word alignment quality . For word alignment evaluation , we calculated precision , recall
D15-1144 which contains 398.6 M words . For alignment evaluation , we used the Tsinghua Chinese-English
C00-1015 of the various target language alignment evaluation standards . Because of null links
D11-1046 task , it matters little what our alignment evaluation metric is . Why do grow-diag-final
P10-1032 corpora to carry out the sub-tree alignment evaluation . The first is HIT gold standard
D11-1046 the relationship between word alignment evaluation metrics and BLEU score remains
C00-1015 devised for strictly bilingual word alignment evaluation , and not to the purpose
E14-2012 existing shared tasks on word alignment evaluation . Results obtained with THOT
E09-3010 matching strategy . The Ontology Alignment Evaluation Initiative ( OAEI ) offers
D15-1210 Tsinghua Chinese-English word alignment evaluation data set . The evaluation metric
D15-1289 on the datasets from Ontology Alignment Evaluation Initiative ( OAEI ) show that
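Several of the snippets above ( e.g. D10-1058 and P12-2055 ) refer to alignment error rate ( AER ) together with precision and recall as the word alignment evaluation metrics. As a minimal sketch, assuming the standard definitions over a hypothesized link set A, sure gold links S, and possible gold links P with S a subset of P ( following Och and Ney, 2003 ), the metrics could be computed as below; the function and variable names are illustrative, not taken from any of the cited papers.

# Sketch of standard word alignment evaluation metrics over sets of
# (source_index, target_index) link pairs; assumes sure links S are a
# subset of possible links P, as in Och and Ney (2003).
def alignment_metrics(A, S, P):
    """A: hypothesized links, S: sure gold links, P: possible gold links."""
    A, S, P = set(A), set(S), set(P)
    precision = len(A & P) / len(A) if A else 0.0
    recall = len(A & S) / len(S) if S else 0.0
    aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S)) if (A or S) else 0.0
    return precision, recall, aer

# Toy example for a single sentence pair.
hyp = {(0, 0), (1, 2), (2, 1)}
sure = {(0, 0), (1, 2)}
possible = sure | {(2, 1), (2, 2)}
print(alignment_metrics(hyp, sure, possible))  # -> (1.0, 1.0, 0.0)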