N07-2004 phonological features to independent feature detectors in a Conditional Random Fields
P14-1062 aspects. The range of the feature detectors is limited to the span m of the
P14-1062 separate n-grams . We visualise the feature detectors in the first layer of the network
P14-1062 filter m correspond to a linguistic feature detector that learns to recognise a
D15-1279 vector. Let nc be the number of feature detectors. The output of the tree-based
P14-1062 experiments and we inspect the learnt feature detectors. 5.1 Training In each of the
D09-1035 nonlinear feature detectors and linear feature detectors in the final layer. As shown
D15-1279 the output layer and underlying feature detectors, enabling effective structural
N07-2004 the independently trained MLP feature detectors used in previous work. 2 Conditional
P14-1062 representation. With a folding layer, a feature detector of the i-th order depends now
N07-2004 Here we evaluate phonological feature detectors created from MLP phone posterior
P14-1062 reason higher-order and long-range feature detectors cannot be easily incorporated
D09-1035 autoencoder with stochastic nonlinear feature detectors and linear feature detectors
D15-1279 design a set of fixed-depth subtree feature detectors, called the tree-based convolution
D15-1206 networks. By randomly omitting feature detectors from the network during training
P14-1062 in order to learn fine-grained feature detectors, it is beneficial for a model
P14-1062 formulation of the network so far, feature detectors applied to an individual row
P14-1062 the DCNN is associated with a feature detector or neuron that learns during
P14-1062 width 7, for each of the 288 feature detectors we rank all 7-grams occurring
D13-1176 and can be thought of as learnt feature detectors. From the sentence matrix Ee