C80-1074 be the nominalization and the embedding /embedding/NN operator respectively . An arbitrary
A00-2015 subordinate clauses , in which scope embedding /embedding/NN preference of subordinate clauses
D12-1086 based on Euclidean co-occurrence embedding /embedding/NN combines the paradigmatic context
C90-2071 considerations arising from the embedding /embedding/NN phrase and context . Figure 1
C80-1074 operator ( Fill , FII 2 and FI21 ) , embedding /embedding/NN operator fill and connecting
C92-3137 re-Evaluation of the attitude in the embedding /embedding/NN attitude contexts . Thus , ill
C92-2095 eliminate unnecessary center - embedding /embedding/NN ; and ( 3 ) eliminating of scrambling
C80-1074 Fl ) , nominalization ( FII ) , embedding /embedding/NN ( fill ) , connecting ( FIV )
C92-2072 thus , we would include multiple embedding /embedding/NN constructions , poten - ACT ,
C82-2022 of PP 's , for which both the " embedding /embedding/NN " and the " same-level " hypotheses
D14-1012 prototype approach , for utilizing the embedding /embedding/NN features . The presented approaches
D14-1012 approaches can better utilize the word embedding /embedding/NN features , among which the distributional
D14-1012 outperforming the dense and continuous embedding /embedding/NN features by nearly 2 points of
D14-1015 investigate how to improve bilingual embedding /embedding/NN which has been successfully used
D14-1015 translation ( SMT ) . Despite bilingual embedding /embedding/NN 's success , the contextual information
D14-1015 memory-efficient model for learning bilingual embedding /embedding/NN , taking both the source phrase
D14-1015 generated from our proposed bilingual embedding /embedding/NN model are used as features in
D14-1030 some learned representation ( embedding /embedding/NN ) , and ( 3 ) compute output
D14-1030 Secondly , the neural network embedding /embedding/NN of word i can predict the MEG
D14-1062 relevance for the in-domain task . By embedding /embed/VBG our latent domain phrase model