C80-1074 |
Let FII and FIII be the nominalization and the
embedding
/embedding/NN
operators , respectively .
|
A00-2015 |
This paper proposes a statistical method for learning dependency preference of Japanese subordinate clauses , in which scope
embedding
/embedding/NN
preference of subordinate clauses is exploited as a useful information source for disambiguating dependencies between subordinate clauses .
|
D12-1086 |
Our best model based on Euclidean co-occurrence
embedding
/embedding/NN
combines the paradigmatic context representation with morphological and orthographic features and achieves 80 % many-to-one accuracy on a 45-tag 1M word corpus .
|
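The 80 % figure above is a many-to-one accuracy, the standard scoring for unsupervised POS induction: every induced cluster is mapped to the gold tag it co-occurs with most often, and token-level accuracy is computed under that mapping. A minimal sketch (the toy data and names are illustrative, not from D12-1086):

```python
from collections import Counter, defaultdict

def many_to_one_accuracy(induced, gold):
    """Map each induced cluster to its most frequent gold tag,
    then score token-level accuracy under that mapping."""
    assert len(induced) == len(gold)
    # Count gold tags per induced cluster.
    counts = defaultdict(Counter)
    for c, t in zip(induced, gold):
        counts[c][t] += 1
    # Each cluster votes for its majority gold tag (several clusters
    # may map to the same tag, hence "many-to-one").
    mapping = {c: tags.most_common(1)[0][0] for c, tags in counts.items()}
    correct = sum(mapping[c] == t for c, t in zip(induced, gold))
    return correct / len(gold)

# Toy example: 6 tokens, 3 induced clusters, 2 gold tags.
induced = [0, 0, 1, 1, 2, 2]
gold    = ["NN", "NN", "VB", "NN", "VB", "VB"]
print(many_to_one_accuracy(induced, gold))  # 0.833...
```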
C90-2071 |
However , this " Maximal Conventionality Principle " can easily be overruled by global considerations arising from the
embedding
/embedding/NN
phrase and context .
|
C80-1074 |
Then , the modal operators ( FI11 , FI12 and FI21 ) ,
embedding
/embedding/NN
operator FIII and connecting operator FIV are extracted by investigating the variety and the inflectional form of the predicate or the words which follow the predicate .
|
C92-3137 |
In turn , simulative reasoning can produce results applicable to the attitude context where it takes place , and may sometimes affect related contexts , e.g. , causing re-evaluation of the attitude in the
embedding
/embedding/NN
attitude contexts .
|
C92-2095 |
( = S ) on the right and so eliminate unnecessary center -
embedding
/embedding/NN
; and ( 3 ) eliminating scrambling and NP drop to isolate the separate effects of head-final ( e.g. , Verb-final ) phrase structure in Japanese .
|
C80-1074 |
They are classified into six groups , that is , modal ( FI ) , nominalization ( FII ) ,
embedding
/embedding/NN
( FIII ) , connecting ( FIV ) , elliptical ( FV ) and anaphoric operator ( FVI ) .
|
C92-2072 |
Note that these ` errors ' are not syntactically incorrect , but are constructions which , if overused , may result in poor writing , and as such are often included in style-checker ` hit-lists ' ; thus , we would include multiple
embedding
/embedding/NN
constructions , poten - ...
|
C82-2022 |
Another problem is posed by the structural ambiguity of sequences of PP 's , for which both the "
embedding
/embedding/NN
" and the " same-level " hypotheses are presented !
|
D14-1012 |
In this study , we investigate and analyze three different approaches , including a newly proposed distributional prototype approach , for utilizing the
embedding
/embedding/NN
features .
|
D14-1012 |
Experiments on the task of named entity recognition show that each of the proposed approaches can better utilize the word
embedding
/embedding/NN
features , among which the distributional prototype approach performs the best .
|
D14-1012 |
Moreover , the combination of the approaches provides additive improvements , outperforming the dense and continuous
embedding
/embedding/NN
features by nearly 2 points of F1 score .
|
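As a hedged sketch of the general idea behind prototype-style embedding features: discrete features fire when a word's embedding is close to a per-class prototype vector, so downstream linear models can consume dense embeddings indirectly. The thresholding scheme, names, and toy vectors below are assumptions for illustration, not the exact recipe of D14-1012:

```python
import numpy as np

def prototype_features(word_vec, prototypes, threshold=0.5):
    """Fire a binary feature for every class prototype whose cosine
    similarity to the word's embedding exceeds a threshold.
    `prototypes` maps a label (e.g. 'PER') to a prototype vector."""
    feats = []
    for label, proto in prototypes.items():
        sim = word_vec @ proto / (
            np.linalg.norm(word_vec) * np.linalg.norm(proto) + 1e-12)
        if sim > threshold:
            feats.append(f"PROTO={label}")
    return feats

# Toy 3-d embeddings; real prototypes would be built from the embeddings
# of words strongly associated with each entity class.
prototypes = {"PER": np.array([1.0, 0.0, 0.0]),
              "LOC": np.array([0.0, 1.0, 0.0])}
print(prototype_features(np.array([0.9, 0.1, 0.0]), prototypes))  # ['PROTO=PER']
```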
D14-1015 |
We investigate how to improve bilingual
embedding
/embedding/NN
which has been successfully used as a feature in phrase-based statistical machine translation ( SMT ) .
|
D14-1015 |
Despite bilingual
embedding
/embedding/NN
's success , the contextual information , which is of critical importance to translation quality , was ignored in previous work .
|
D14-1015 |
To employ the contextual information , we propose a simple and memory-efficient model for learning bilingual
embedding
/embedding/NN
, taking both the source phrase and context around the phrase into account .
|
D14-1015 |
Bilingual translation scores generated from our proposed bilingual
embedding
/embedding/NN
model are used as features in our SMT system .
|
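The mechanism described in these entries, scoring a phrase pair by the similarity of a context-enriched source representation and a target representation in a shared embedding space, can be sketched as follows. All shapes, the vector averaging, and the projection W are illustrative assumptions; the actual D14-1015 model and its training objective are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
# Toy lookup tables; real embeddings would be learned bilingually.
src_emb = {w: rng.normal(size=dim) for w in "das Haus in der Stadt".split()}
tgt_emb = {w: rng.normal(size=dim) for w in "the house in town".split()}

def phrase_vec(words, table):
    """Represent a phrase (or its surrounding context window) as the
    average of its word vectors."""
    return np.mean([table[w] for w in words], axis=0)

def bilingual_score(src_phrase, src_context, tgt_phrase, W):
    """Similarity between a context-enriched source representation and
    the target phrase, usable as one feature in a phrase table."""
    s = np.concatenate([phrase_vec(src_phrase, src_emb),
                        phrase_vec(src_context, src_emb)])
    t = phrase_vec(tgt_phrase, tgt_emb)
    return float(t @ (W @ s))  # W: a learned projection, random here

W = rng.normal(size=(dim, 2 * dim))
print(bilingual_score(["das", "Haus"], ["in", "der", "Stadt"],
                      ["the", "house"], W))
```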
D14-1030 |
Many statistical models for natural language processing exist , including context-based neural networks that ( 1 ) model the previously seen context as a latent feature vector , ( 2 ) integrate successive words into the context using some learned representation (
embedding
/embedding/NN
) , and ( 3 ) compute output probabilities for incoming words given the context .
|
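The three steps enumerated in this sentence map directly onto the loop of a simple recurrent language model: the hidden state is the latent context vector (1), each word enters through an embedding lookup (2), and a softmax over the vocabulary gives output probabilities for the incoming word (3). A self-contained numpy sketch with tiny random parameters, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
V, d, h = 5, 3, 4             # vocabulary, embedding, hidden sizes
E  = rng.normal(size=(V, d))  # (2) learned word representations (embedding)
Wh = rng.normal(size=(h, h))
We = rng.normal(size=(h, d))
Wo = rng.normal(size=(V, h))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

context = np.zeros(h)         # (1) latent feature vector for the seen context
for word in [0, 3, 1]:        # word ids of the prefix
    # (3) output probabilities for the incoming word given the context
    probs = softmax(Wo @ context)
    print(f"P(w={word} | context) = {probs[word]:.3f}")
    # (2) integrate the word into the context via its embedding
    context = np.tanh(Wh @ context + We @ E[word])
```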
D14-1030 |
Secondly , the neural network
embedding
/embedding/NN
of word i can predict the MEG activity when word i is presented to the subject , revealing that it is correlated with the brain 's own representation of word i. Moreover , we find that the activity is predicted in different regions of the brain with varying delay .
|
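A common way to test whether word embeddings predict recorded brain activity is a linear (ridge) regression from the embedding of word i to the MEG signal at each sensor. The synthetic-data sketch below illustrates only that mapping; the actual D14-1030 setup, with its delays and brain regions, is not modeled here:

```python
import numpy as np

rng = np.random.default_rng(2)
n_words, d, n_sensors = 50, 8, 6
X = rng.normal(size=(n_words, d))   # embedding of each presented word
B_true = rng.normal(size=(d, n_sensors))
Y = X @ B_true + 0.1 * rng.normal(size=(n_words, n_sensors))  # synthetic MEG

# Ridge regression: B = (X^T X + lam I)^{-1} X^T Y
lam = 1.0
B = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Sanity check: correlation between predicted and observed signal per sensor
# (a real evaluation would use held-out words).
pred = X @ B
r = [np.corrcoef(pred[:, j], Y[:, j])[0, 1] for j in range(n_sensors)]
print([f"{v:.2f}" for v in r])
```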
D14-1062 |
By
embedding
/embed/VBG
our latent domain phrase model in a sentence-level model and training the two in tandem , we are able to adapt all core translation components together -- phrase , lexical and reordering .
|