P14-1138 penalty function that ensures word embedding consistency across two directional
E89-1033 has been implemented , and an embedding of this in an interactive parsing
D15-1038 completion impute missing facts by embedding knowledge graphs in vector spaces
P06-2071 from the image and text of the embedding web page . We evaluate our method
P15-2002 show that , although an isometric embedding is intractable , it is possible
S14-2011 provides dense , low-dimensional embedding for each fragment which allows
C80-1074 FI ) , nominalization ( FII ) , embedding ( FIII ) , connecting ( FIV )
P10-1121 predicting reading times , and that embedding difference makes a significant
J00-3002 The unacceptability of centre embedding is illustrated by the fact that
W10-2802 ularities . This latter is employed by embedding prior FrameNet-derived knowledge
D14-1012 prototype approach , for utilizing the embedding features . The presented approaches
H90-1008 the second only does so if no embedding link exists at the current focus
C86-1088 first-order model structure . A proper embedding is a function from U / ~ to
D15-1183 , we propose a generative word embedding model , which is easy to interpret
N12-1088 input and output spaces . This embedding is learned in such a way that
J14-2006 secret bitstring , the secret embedding fails . Therefore , it is important
D15-1196 outperforms the state-of-the-art word embedding methods in both representation
H91-1106 the sentence generator ; and the embedding of all these parts into the joint
T87-1012 they all assiduously avoid center embedding in favor of strongly left- or
P13-1078 translation models with/without embedding features on Chinese-to-English