C04-1033 procedure is the counterpart of the training procedure . Given a testing document
A00-2030 used the following multi-step training procedure which exploited the Penn TREEBANK
C04-1006 we will now sketch the standard training procedure for the lexicon model . The EM
C04-1006 translation probabilities . The standard training procedure of the statistical models uses
A94-1030 bilingual corpus . The unsupervised training procedure is described in detail in Fung
C00-2098 top of the parser stack . The training procedure of our probabilistic parser is
A97-1004 invalid sentence boundary . The training procedure requires no hand-crafted rules
C04-1045 ) Σ_~f C_hier(~f ; e) The training procedure for the other model parameters
C04-1080 art results without the lengthy training procedure involved in other high-performing
C04-1060 in one language is given to the training procedure . It is important to note , however
C00-2141 local context templates made the training procedure very easy . * The three-stage
A83-1031 testing has begun using the same training procedure . It is too early to report results
C04-1032 computed as the result of the training procedure . In the source-to-target translation
A97-1051 subsequent manual or automatic training procedures . However , much of the drudgery
C04-1033 NP_k = argmax_{NP_i ∈ C_k} |S_{NP_i}| 3.2 Training procedure Given an annotated training document
C00-2141 very easy . * The three-stage training procedure guarantees that only the useful
A83-1031 has not yet been possible . The training procedure has two parts . The first part
C04-1045 2001 ) for the conventional EM training procedure . Experimental results are reported
C04-1060 estimated with complete EM , while the training procedure for the IBM models samples from
C04-1032 in a fast and robust alignment training procedure . We also tested the more simple