W09-3109 scores are calculated with the NIST scoring tool with respect to four reference
W03-2802 percentage in Table 1 ) . With our scoring tool the definition of new operators
W09-2416 achieves over all test examples . Scoring Tool . We will provide an automatic
W07-0712 scores , as computed with the NIST scoring tool . For time reasons , the decoder
P06-1075 one-sentence queries . When using the NIST scoring tool to evaluate the translation quality
W03-2802 the system capabilities . The scoring tool for AVR comparison is able to
W03-2801 corpus ; an evaluation metric and a scoring tool , implementing this metric .
P06-1075 sentences are required . The NIST scoring tool supports multiple references . In
N04-1018 predicted and then evaluated by the scoring tools developed for the NIST Rich Transcript
W03-2802 . 3.1.5 Evaluation metrics and scoring tool Common evaluation metrics are
N04-4018 numbers computed with the su-eval scoring tool from NIST . SU error rates for
P14-1065 . We have fixed a bug in the scoring tool from WMT12 , which was making
D10-1002 ( Roark et al. , 2006 ) . This scoring tool produces scores that are identical
W03-1505 experiment , and the automated scoring tools in GATE were used to evaluate
N04-1018 Strassel , 2003 ) and standard scoring tools ( NIST , 2003 ) . 2.1 Sentence
W03-2802 accuracy and the ability of the scoring tool to point out the errors allowed
W05-1101 review files and inputs to bracket scoring tools . The results of some limited
S12-1064 3.3 Meteor features The Meteor scoring tool ( Denkowski and Lavie , 2011
W03-1906 benefited by the existence of scoring tools , which could automatically compare
M91-1014 The MUC-3 corpus and automatic scoring tool made it possible for us to do