tech,9-9-J05-1003,ak | a new <term> algorithm </term> for the <term> | boosting approach | </term> which takes advantage of the <term> | #8235 The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. | |
model,40-4-J05-1003,ak | define a <term> derivation </term> or a <term> | generative model | </term> which takes these <term> features </term> | #8115 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. | |
tech,21-11-J05-1003,ak | simplicity and efficiency — to work on <term> | feature selection methods | </term> within <term> log-linear ( maximum-entropy | #8291 We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |