D10-1055 developed by the NIST Machine Translation Evaluation Workshop . In addition to
D09-1117 methods for carrying out machine translation evaluation . Methods differ in their focus
C04-1168 baseline IBM model 4 in all automatic translation evaluation metrics . The largest was
C04-1072 2003 NIST Chinese-English machine translation evaluation . * Only one objective function
C96-2188 the industrial user in machine translation evaluation . The emphasis is laid on the
C02-1003 sentence-pairs that come from the machine translation evaluation corpus ( Duan 1996 ) . The average
C02-1057 examples . Another path of machine translation evaluation is based on test suites . Yu
C04-1072 2003 NIST Chinese-English machine translation evaluation . generating more human like
D08-1063 NIST 2003 Arabic-English machine translation evaluation test set . The Spanish to English
C04-1072 1989 ) . To apply LCS in machine translation evaluation , we view a translation as a
C02-1057 scorings . Introduction Machine translation evaluation has always been a key and open
C96-2188 annotated test set for Machine Translation evaluation by an Industrial User Eva DAUPHIN
C04-1072 2003 NIST Chinese-English machine translation evaluation . According to Table 3 , we find
C04-1072 for evaluating automatic machine translation evaluation metrics automatically without
D08-1078 While the role of BLEU in machine translation evaluation is a much discussed topic , it
C96-2189 elaborates on the design of a machine translation evaluation method that aims to determine
C02-1057 system , an automatic machine translation evaluation method is implemented which adopts
D10-1064 pragmatic information . 3 . Robust translation evaluation : our approach is designed to
D09-1037 of the NIST Chinese-English translation evaluation , representing a realistic SMT