J80-1001 ATNs have the advantage of being a class of automata into which ordinary context-free phrase structure and "augmented" phrase structure grammars have a straightforward embedding, but which permit various transformations to be performed to produce grammars that can be more efficient than the original.
W15-3820 Such word vector representations, also known as word embeddings, have been shown to improve the performance of machine learning models in several NLP tasks.
C00-1055 We have experimented with more elaborate functions that indicate how balanced the parse tree is, and less complicated functions such as the level of embedding, number of parentheses, and so on.
J00-3002 We argue that an incremental procedure of proof net construction affords an account of various processing phenomena, including garden pathing, the unacceptability of center embedding, preference for lower attachment, left-to-right quantifier scope preference, and heavy noun phrase shift.
P13-2087 We propose a method that takes as input an existing embedding and some labeled data, and produces an embedding in the same space but with better predictive performance on the supervised task.
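The entry above describes the method only at this level of detail; as a rough illustration of the general idea, the sketch below fine-tunes a pre-existing embedding matrix against labeled data and returns an updated matrix of the same shape (same space, task-tuned). The mean pooling, linear classifier head, and hyperparameters are assumptions made for the sketch, not details taken from the paper.

```python
# Sketch: fine-tune an existing embedding matrix with labeled data,
# producing a new matrix of the same shape (same space, task-tuned).
import torch
import torch.nn as nn

def reembed(pretrained, token_ids, labels, num_classes, epochs=5, lr=1e-3):
    # pretrained: (vocab, dim) array; token_ids: (N, L) LongTensor; labels: (N,) LongTensor
    emb = nn.Embedding.from_pretrained(
        torch.as_tensor(pretrained, dtype=torch.float), freeze=False)
    clf = nn.Linear(emb.embedding_dim, num_classes)       # assumed task head
    opt = torch.optim.Adam(list(emb.parameters()) + list(clf.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        feats = emb(token_ids).mean(dim=1)                 # assumed mean pooling
        loss = nn.functional.cross_entropy(clf(feats), labels)
        loss.backward()
        opt.step()
    return emb.weight.detach().numpy()                     # updated embedding matrix
```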
W15-4310 Besides word embeddings, we use part-of-speech (POS) tags, chunks, and Brown clusters induced from Wikipedia as features.
W10-4104 The discourse structures in the other examples of this section are of the embedding type, while this example is of the overlapping type.
C92-2095 (= S) on the right, and so eliminate unnecessary center-embedding; and (3) eliminating scrambling and NP drop to isolate the separate effects of head-final (e.g., verb-final) phrase structure in Japanese.
C80-1009 As he points out, this is a simple and natural way of treating any construction capable of unlimited embedding.
W11-1827 However, diverging from that system's more pragmatic nature, we more clearly distinguish the shared task concerns from a general semantic composition scheme that is based on the notion of embedding.
C82-2022 Another problem is posed by the structural ambiguity of sequences of PPs, for which both the "embedding" and the "same-level" hypotheses are presented.
C80-1074 They are classified into six groups, that is, modal (FI), nominalization (FII), embedding (FIII), connecting (FIV), elliptical (FV), and anaphoric operator (FVI).
J98-4006 Chomsky's argument that finite-state devices are not able to represent natural language structures, especially those involving central embedding (recursion), was one of the reasons for this fact.
S14-2033 Coooolll is built in a supervised learning framework by concatenating the sentiment-specific word embedding (SSWE) features with the state-of-the-art hand-crafted features.
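The entry above names the mechanism in a single sentence; a minimal sketch of that kind of feature concatenation, using made-up feature blocks and an off-the-shelf linear SVM standing in for the system's actual classifier and feature extractors, might look like this:

```python
# Sketch: concatenate embedding-based features with hand-crafted features
# and train a linear classifier on the combined representation.
import numpy as np
from sklearn.svm import LinearSVC

sswe_feats = np.random.rand(500, 150)     # placeholder pooled SSWE vectors
handcrafted = np.random.rand(500, 40)     # placeholder lexicon/ngram features
labels = np.random.randint(0, 3, 500)     # e.g., positive / negative / neutral

X = np.concatenate([sswe_feats, handcrafted], axis=1)   # simple concatenation
clf = LinearSVC().fit(X, labels)
print(clf.predict(X[:5]))                 # toy usage on the training features
```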
D14-1030 Many statistical models for natural language processing exist, including context-based neural networks that (1) model the previously seen context as a latent feature vector, (2) integrate successive words into the context using some learned representation (embedding), and (3) compute output probabilities for incoming words given the context.
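The three steps listed in the entry above map naturally onto a small recurrent language model; the sketch below is a generic illustration of that architecture (the GRU choice and layer sizes are assumptions), not the model of any particular paper.

```python
# Sketch: a context-based neural LM -- embed words (step 2), carry a latent
# context vector through an RNN (step 1), and predict the next word (step 3).
import torch
import torch.nn as nn

class ContextLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)            # learned word representation
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)  # latent context vector
        self.out = nn.Linear(hidden_dim, vocab_size)              # output scores over vocabulary

    def forward(self, token_ids):                  # token_ids: (batch, seq_len) word ids
        vectors = self.embed(token_ids)            # (batch, seq_len, emb_dim)
        context, _ = self.rnn(vectors)             # latent context at each position
        return self.out(context)                   # logits for the next word

model = ContextLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 7)))    # toy batch of token-id sequences
probs = logits.softmax(dim=-1)                     # per-position next-word distributions
```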
D15-1031 We study the problem of jointly embedding a knowledge base and a text corpus.
J82-3001 The intuitive notion of "embedding a linguistic theory into a model of language use" as it is generally construed is much stronger than this, since it implies that the parsing system follows some (perhaps all) of the same operating principles as the linguistic system, and makes reference in its operation to the same system of rules.
T78-1035 Linguists have long recognized the desirability of embedding a theory of grammar within a theory of linguistic performance (see, e.g., Chomsky (1965: 10-15)).
P15-2098 We show that radical embedding achieves comparable, and sometimes even better, results than competing methods.
D15-1306 In quantitative analysis, we show that lexical and syntactic features are useful for automatic categorization of annoying behaviors, and frame-semantic features further boost the performance; that leveraging large lexical embeddings to create additional training instances significantly improves the lexical model; and that incorporating frame-semantic embeddings achieves the best overall performance.