ORANGE: A Method for Evaluating Automatic Evaluation Metrics for Machine Translation

Comparisons of automatic evaluation metrics for machine translation are usually conducted at the corpus level using correlation statistics, such as Pearson's product moment correlation coefficient or Spearman's rank order correlation coefficient, between human scores and automatic scores. However, such comparisons rely on human judgments of translation quality, such as adequacy and fluency. Unfortunately, these judgments are often inconsistent and very expensive to acquire. In this paper, we introduce a new evaluation method, ORANGE, for evaluating automatic machine translation evaluation metrics automatically, without extra human involvement beyond a set of reference translations. We also show the results of comparing several existing automatic metrics and three new automatic metrics using ORANGE.

BLEU is smoothed (Lin and Och, 2004b), and it considers matching only up to bigrams, because this yields higher correlations with human judgments than when higher-order n-grams are included. This smoothed per-sentence BLEU is used as the similarity metric.
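The following is a minimal sketch of a smoothed sentence-level BLEU restricted to unigrams and bigrams, assuming an add-one smoothing of the modified n-gram precision for n > 1 in the spirit of Lin and Och (2004b). The function and variable names are hypothetical and the details (smoothing scheme, brevity penalty against the closest reference) are illustrative assumptions, not the paper's exact implementation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_sentence_bleu(candidate, references, max_n=2):
    """Smoothed per-sentence BLEU using n-grams up to max_n (here bigrams).

    Assumption: add-one smoothing of matched/total n-gram counts for n > 1,
    as a stand-in for the smoothing described in Lin and Och (2004b).
    """
    cand_tokens = candidate.split()
    ref_token_lists = [r.split() for r in references]

    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand_tokens, n)
        # Clip each candidate n-gram count by its maximum count in any reference.
        max_ref_counts = Counter()
        for ref_tokens in ref_token_lists:
            for gram, count in ngrams(ref_tokens, n).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], count)
        matched = sum(min(c, max_ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if n > 1:  # smooth higher-order precisions so a zero match does not zero the score
            matched, total = matched + 1, total + 1
        if total == 0 or matched == 0:
            return 0.0
        log_precisions.append(math.log(matched / total))

    # Brevity penalty against the reference closest in length to the candidate.
    closest_ref_len = min((len(r) for r in ref_token_lists),
                          key=lambda rl: (abs(rl - len(cand_tokens)), rl))
    bp = 1.0 if len(cand_tokens) > closest_ref_len else \
        math.exp(1 - closest_ref_len / max(len(cand_tokens), 1))

    return bp * math.exp(sum(log_precisions) / max_n)

# Usage: score one machine translation against its reference translations.
print(smoothed_sentence_bleu("the cat sat on the mat",
                             ["the cat is on the mat",
                              "there is a cat on the mat"]))
```

Restricting the geometric mean to unigram and bigram precisions keeps the per-sentence score informative even for short segments, where trigram and 4-gram matches are frequently zero.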