C90-2071 However, this "Maximal Conventionality Principle" can easily be overruled by global considerations arising from the embedding phrase and context.
C90-3012 We will attempt to show how human performance limitations on various types of syntactic embedding constructions in Germanic languages can be modelled in a relational network linguistic framework.
C92-2072 Note that these 'errors' are not syntactically incorrect, but are constructions which, if overused, may result in poor writing, and as such are often included in style-checker 'hit-lists'; thus, we would include multiple embedding constructions, poten-
C92-2095 (= S) on the right and so eliminate unnecessary center-embedding; and (3) eliminating scrambling and NP drop to isolate the separate effects of head-final (e.g., Verb-final) phrase structure in Japanese.
C92-3137 In turn, simulative reasoning can produce results applicable to the attitude context where it takes place, and may sometimes affect related contexts, e.g., causing re-evaluation of the attitude in the embedding attitude contexts.
C96-1045 F-structures are either interpreted indirectly in terms of a homomorphic embedding into Quasi Logical Form (QLF) (Alshawi, 1992; Alshawi & Crouch, 1992; Cooper et al., 1994a) representations or directly in terms of adapting QLF interpretation clauses to f-structure representations.
D08-1086 Through simulations, the effectiveness of the analyzer is investigated, and then an LVCSR system embedding the presented analyzer is evaluated.
D10-1116 This method ensures that each word encodes a unique sequence of bits without cutting out a large number of synonyms, thus maintaining a reasonable embedding capacity.
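As a rough illustration of the idea in the D10-1116 sentence above — giving every synonym in a substitution set a unique prefix-free bit code so that no synonyms have to be discarded — here is a minimal, self-contained sketch. The synonym sets, the code construction (recursive halving), and the padding assumption are illustrative choices, not the paper's actual method.

```python
# Sketch of synonym-substitution steganography where every synonym in a set
# receives a unique prefix-free bit code, so the full set is usable for embedding.

def assign_codes(words, prefix=""):
    """Recursively split a synonym set so each word gets a unique prefix-free code."""
    if len(words) == 1:
        return {words[0]: prefix}  # a one-word set at the top level carries zero bits
    mid = len(words) // 2
    codes = {}
    codes.update(assign_codes(words[:mid], prefix + "0"))
    codes.update(assign_codes(words[mid:], prefix + "1"))
    return codes

def embed(bits, synonym_sets):
    """Pick, at each substitution site, the synonym whose code is a prefix of the payload."""
    out, used = [], 0
    for syns in synonym_sets:
        codes = assign_codes(sorted(syns))
        # the code tree is full, so exactly one code matches the remaining payload
        # (assumes the payload is long enough; in practice it would be padded)
        word, code = next((w, c) for w, c in codes.items() if bits.startswith(c, used))
        out.append(word)
        used += len(code)
    return out, used  # chosen cover words and number of payload bits consumed

def extract(words, synonym_sets):
    """Recover the bit string from the chosen synonyms."""
    return "".join(assign_codes(sorted(syns))[w] for w, syns in zip(words, synonym_sets))

if __name__ == "__main__":
    sets = [["big", "large", "huge"], ["fast", "quick", "rapid", "swift", "speedy"]]
    payload = "10110"
    cover, used = embed(payload, sets)
    assert extract(cover, sets) == payload[:used]
    print(cover, used)
```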
D12-1086 Our best model based on Euclidean co-occurrence embedding combines the paradigmatic context representation with morphological and orthographic features and achieves 80% many-to-one accuracy on a 45-tag 1M word corpus.
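For readers unfamiliar with the metric cited in D12-1086, many-to-one accuracy is the standard evaluation for unsupervised tag induction: every induced cluster is mapped to the gold tag it co-occurs with most often (several clusters may share a tag), and token-level accuracy is computed under that mapping. A small self-contained sketch with invented data:

```python
from collections import Counter, defaultdict

def many_to_one_accuracy(induced, gold):
    """Map each induced cluster to its most frequent gold tag, then score token accuracy.
    Several clusters may map to the same gold tag, hence 'many-to-one'."""
    assert len(induced) == len(gold)
    by_cluster = defaultdict(Counter)
    for cluster, tag in zip(induced, gold):
        by_cluster[cluster][tag] += 1
    mapping = {c: counts.most_common(1)[0][0] for c, counts in by_cluster.items()}
    correct = sum(mapping[c] == t for c, t in zip(induced, gold))
    return correct / len(gold)

if __name__ == "__main__":
    # toy example: three induced clusters against gold tags DET/NOUN/VERB
    induced = [0, 1, 2, 0, 1, 2, 2, 1]
    gold    = ["DET", "NOUN", "VERB", "DET", "NOUN", "VERB", "NOUN", "NOUN"]
    print(round(many_to_one_accuracy(induced, gold), 3))  # 0.875
```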
D14-1012 However, fundamental problems remain in effectively incorporating the word embedding features within the framework of linear models.
D14-1012 In this study, we investigate and analyze three different approaches, including a newly proposed distributional prototype approach, for utilizing the embedding features.
D14-1012 Experiments on the task of named entity recognition show that each of the proposed approaches can better utilize the word embedding features, among which the distributional prototype approach performs the best.
D14-1012 Moreover, the combination of the approaches provides additive improvements, outperforming the dense and continuous embedding features by nearly 2 points of F1 score.
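The D14-1012 sentences above contrast dense, continuous embedding features with a distributional prototype approach. The sketch below illustrates one plausible way prototype-style features can be derived — each word fires a discrete feature for every prototype word it is sufficiently close to in embedding space — which can then sit alongside ordinary indicator features in a linear model. The embeddings, prototype lists, and threshold are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Turning continuous word embeddings into discrete features via prototype similarity.
rng = np.random.default_rng(0)
VOCAB = ["london", "paris", "sings", "runs", "monday"]
EMB = {w: rng.standard_normal(50) for w in VOCAB}           # stand-in for trained embeddings
PROTOTYPES = {"LOC": ["london"], "O": ["sings", "monday"]}   # hypothetical prototypes per label
THRESHOLD = 0.0                                              # hypothetical cosine cut-off

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def prototype_features(word):
    """Discrete features usable in a linear sequence labeller (e.g., a CRF or perceptron)."""
    feats = []
    vec = EMB[word]
    for label, protos in PROTOTYPES.items():
        for p in protos:
            if cosine(vec, EMB[p]) > THRESHOLD:
                feats.append(f"proto={label}:{p}")
    return feats

print(prototype_features("paris"))
```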
D14-1015 We investigate how to improve bilingual embedding which has been successfully used as a feature in phrase-based statistical machine translation (SMT).
D14-1015 Despite bilingual embedding's success, the contextual information, which is of critical importance to translation quality, was ignored in previous work.
D14-1015 To employ the contextual information, we propose a simple and memory-efficient model for learning bilingual embedding, taking both the source phrase and context around the phrase into account.
D14-1015 Bilingual translation scores generated from our proposed bilingual embedding model are used as features in our SMT system.
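To make the D14-1015 idea concrete, the deliberately simplified sketch below scores a translation pair by comparing a representation of the source phrase plus its surrounding context against a representation of the target phrase; such a score could then be added as one more feature in the SMT log-linear model. The composition (averaging plus a projection), dimensions, and vocabularies are assumptions, not the paper's architecture.

```python
import numpy as np

# Schematic context-aware bilingual embedding score used as an SMT feature.
rng = np.random.default_rng(1)
DIM = 8
src_emb = {w: rng.standard_normal(DIM) for w in ["the", "house", "is", "red"]}
tgt_emb = {w: rng.standard_normal(DIM) for w in ["das", "haus", "ist", "rot"]}
W_src = rng.standard_normal((DIM, DIM))   # hypothetical projection for the source side
W_tgt = rng.standard_normal((DIM, DIM))   # hypothetical projection for the target side

def represent(words, emb, W):
    """Average word vectors, then project (a stand-in for a learned composition)."""
    avg = np.mean([emb[w] for w in words], axis=0)
    return np.tanh(W @ avg)

def bilingual_score(src_phrase, src_context, tgt_phrase):
    """Similarity between (source phrase + context) and target phrase representations."""
    s = represent(src_phrase + src_context, src_emb, W_src)
    t = represent(tgt_phrase, tgt_emb, W_tgt)
    return float(s @ t / (np.linalg.norm(s) * np.linalg.norm(t)))

# Usage: the score would be attached to each phrase pair as an extra decoder feature.
print(bilingual_score(["house"], ["the", "is", "red"], ["haus"]))
```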
D14-1030 Many statistical models for natural language processing exist, including context-based neural networks that (1) model the previously seen context as a latent feature vector, (2) integrate successive words into the context using some learned representation (embedding), and (3) compute output probabilities for incoming words given the context.
D14-1030 Secondly, the neural network embedding of word i can predict the MEG activity when word i is presented to the subject, revealing that it is correlated with the brain's own representation of word i. Moreover, we find that the activity is predicted in different regions of the brain with varying delay.
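The three ingredients enumerated in the first D14-1030 sentence — a latent context vector, integration of each word's embedding into that context, and output probabilities for the next word — can be sketched with an Elman-style recurrence. The weights below are random stand-ins for trained parameters, and the specific recurrence is an illustrative choice rather than the cited paper's exact model.

```python
import numpy as np

# Minimal forward pass of a context-based neural language model.
rng = np.random.default_rng(2)
VOCAB = ["<s>", "the", "dog", "barks", "</s>"]
V, E, H = len(VOCAB), 16, 32
emb   = rng.standard_normal((V, E)) * 0.1   # word embeddings
W_in  = rng.standard_normal((H, E)) * 0.1   # embedding -> context
W_rec = rng.standard_normal((H, H)) * 0.1   # previous context -> context
W_out = rng.standard_normal((V, H)) * 0.1   # context -> vocabulary scores

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def step(context, word_id):
    """Fold the embedding of word_id into the latent context (1)+(2),
    then return the next-word distribution given that context (3)."""
    context = np.tanh(W_in @ emb[word_id] + W_rec @ context)
    probs = softmax(W_out @ context)
    return context, probs

context = np.zeros(H)
for w in ["<s>", "the", "dog"]:
    context, probs = step(context, VOCAB.index(w))
print({VOCAB[i]: round(float(p), 3) for i, p in enumerate(probs)})
```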
D14-1062 By embedding our latent domain phrase model in a sentence-level model and training the two in tandem, we are able to adapt all core translation components together: phrase, lexical and reordering.