D11-1148 specific purpose of augmenting the Charniak parser . However , much subsequent work
C04-1021 this paper . We found that the Charniak parser , which was trained on the WSJ
C04-1021 Parser Enhancements We used the Charniak parser ( Charniak , 2000 ) for the experiments
C04-1021 for re-training . Whereas the Charniak parser was originally trained on close
D10-1031 trees and the output from the Charniak parser , and there is not a big performance
D09-1161 This is because the enhanced Charniak parser provides more accurate model
D08-1093 the more accurate , but slower Charniak parser ( Charniak and Johnson , 2005
C04-1021 great . Initially , plugging the Charniak parser into PRECISE yielded only 61.9
D08-1050 retraining the parser model . Since the Charniak parser does not use a lexicalized grammar
D11-1066 stituency trees , we used the Charniak parser ( Charniak , 2000 ) . We also
D09-1161 Table 9 and Table 10 show that the Charniak parser enhanced by re-ranking and self-training
D09-1108 source side parsed by a modified Charniak parser ( Charniak 2000 ) which can output
D09-1161 confidence score and " C " means Charniak parser confidence score . 5.5 Comparison
D09-1161 score output from the enhanced Charniak parser . Table 9 and Table 10 show that
D09-1161 Berkeley parser and the enhanced Charniak parser by using the new model confidence
D08-1032 well as for the query using the Charniak parser and measured the similarity between
D09-1108 can output a packed forest . The Charniak Parser is trained on CTB5 , tuned on
D08-1093 available answer is to take the Charniak parser performance on WSJ section 24
D08-1093 ) that was used for predicting Charniak parser behavior on the Brown corpus
D11-1096 constituency trees , we used the Charniak parser ( Charniak , 2000 ) whereas we