In this presentation , we describe the features of and <term> requirements </term> for a genuinely useful <term> software infrastructure </term> for this purpose .
The key features of the <term> system </term> include : ( i ) Robust , efficient <term> parsing </term> of <term> Korean </term> ( a <term> verb final language </term> with <term> overt case markers </term> , relatively <term> free word order </term> , and frequent omissions of <term> arguments </term> ) .
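A minimal Python sketch of why overt case markers help under free word order: the marker, not the position, signals the grammatical role. The marker table and whitespace tokenization below are toy assumptions for illustration, not the parser the abstract describes.

```python
# Toy illustration of case-marker-driven argument identification in Korean.
# This is NOT the system described above; it only shows that under free
# word order the overt marker, not the position, signals the role.

CASE_MARKERS = {            # hypothetical, deliberately tiny marker table
    "이": "NOM", "가": "NOM",   # nominative
    "을": "ACC", "를": "ACC",   # accusative
    "에게": "DAT",              # dative
}

def label_arguments(tokens):
    """Map each whitespace token to (stem, role) using its trailing marker."""
    labeled = []
    for tok in tokens:
        role = None
        # check longer markers first so "에게" wins over single-character ones
        for marker, case in sorted(CASE_MARKERS.items(), key=lambda m: -len(m[0])):
            if tok.endswith(marker):
                tok, role = tok[: -len(marker)], case
                break
        labeled.append((tok, role))
    return labeled

# Both orders yield the same roles, illustrating order-independence:
print(label_arguments(["철수가", "영희에게", "책을", "주었다"]))
print(label_arguments(["책을", "철수가", "영희에게", "주었다"]))
```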
other,7-2-P01-1070,bq These <term> models </term> , which are built from <term> shallow linguistic features </term> of <term> questions </term> , are employed to predict target variables which represent a <term> user 's informational goals </term> .
other,36-1-N03-1033,bq We present a new <term> part-of-speech tagger </term> that demonstrates the following ideas : ( i ) explicit use of both preceding and following <term> tag contexts </term> via a <term> dependency network representation </term> , ( ii ) broad use of <term> lexical features </term> , including <term> jointly conditioning on multiple consecutive words </term> , ( iii ) effective use of <term> priors </term> in <term> conditional loglinear models </term> , and ( iv ) fine-grained modeling of <term> unknown word features </term> .
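The tagger's first two ideas can be illustrated with a toy feature-extraction sketch: features drawn from both the preceding and the following tag context, plus a joint feature over consecutive words. The feature templates below are illustrative assumptions, not the paper's exact feature set.

```python
# Sketch of tagger feature templates in the spirit of ideas (i)-(ii) above:
# conditioning on both the previous and the following tag, and jointly on
# consecutive words. Template names are illustrative, not from the paper.

def tag_features(words, i, prev_tag, next_tag):
    """Return a feature dict for position i given neighboring tag contexts."""
    w = words[i]
    feats = {
        "word=" + w: 1.0,
        "prev_tag=" + prev_tag: 1.0,      # preceding tag context
        "next_tag=" + next_tag: 1.0,      # following tag context
        "suffix3=" + w[-3:]: 1.0,         # crude unknown-word feature
    }
    if i > 0:                             # jointly condition on a word pair
        feats["bigram=" + words[i - 1] + "_" + w] = 1.0
    return feats

words = ["the", "old", "man", "boats"]
print(tag_features(words, 2, prev_tag="JJ", next_tag="VBZ"))
```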
other,12-3-P03-1002,bq It is based on : ( 1 ) an extended set of <term> features </term> ; and ( 2 ) <term> inductive decision tree learning </term> .
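A minimal sketch of inductive decision tree learning over a feature set, using scikit-learn as a stand-in (the abstract names no library); the feature vectors and labels are toy data.

```python
# Generic decision-tree sketch (scikit-learn stand-in; the paper's own
# implementation is not shown). X rows are toy feature vectors, y toy labels.
from sklearn.tree import DecisionTreeClassifier

X = [[1, 0, 3], [0, 1, 1], [1, 1, 0], [0, 0, 2]]  # toy extended feature set
y = [1, 0, 1, 0]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[1, 0, 2]]))   # classify an unseen feature vector
```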
other,5-3-P03-1022,bq We present a set of <term> features </term> designed for <term> pronoun resolution </term> in <term> spoken dialogue </term> and determine the most promising <term> features </term> .
other,13-3-C04-1035,bq We then use the <term> predicates </term> of such <term> clauses </term> to create a set of <term> domain independent features </term> to annotate an input <term> dataset </term> , and run two different <term> machine learning algorithms </term> : <term> SLIPPER </term> , a <term> rule-based learning algorithm </term> , and <term> TiMBL </term> , a <term> memory-based system </term> .
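TiMBL is a memory-based (nearest-neighbour) learner; the sketch below uses scikit-learn's k-NN only as an approximate stand-in, with toy feature vectors in place of the paper's domain-independent features.

```python
# Memory-based learning in the style of TiMBL, approximated with k-NN.
# scikit-learn is a stand-in here; the data is toy, not the paper's dataset.
from sklearn.neighbors import KNeighborsClassifier

X = [[0, 1, 1], [1, 0, 0], [1, 1, 0], [0, 0, 1]]  # toy feature annotations
y = ["pos", "neg", "pos", "neg"]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)  # "memory" = stored examples
print(knn.predict([[1, 1, 1]]))   # label by vote of the nearest stored cases
```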
other,5-5-C04-1035,bq The results show that the <term> features </term> in terms of which we formulate our <term> heuristic principles </term> have significant <term> predictive power </term> , and that <term> rules </term> that closely resemble our <term> Horn clauses </term> can be learnt automatically from these <term> features </term> .
other,6-3-C04-1068,bq In this paper , we identify <term> features </term> of <term> electronic discussions </term> that influence the <term> clustering process </term> , and offer a <term> filtering mechanism </term> that removes undesirable influences .
other,11-4-C04-1116,bq According to our assumption , most of the words with similar <term> context features </term> in each author 's <term> corpus </term> tend not to be <term> synonymous expressions </term> .
other,4-3-C04-1128,bq We show that various <term> features </term> based on the structure of <term> email-threads </term> can be used to improve upon <term> lexical similarity </term> of <term> discourse segments </term> for <term> question-answer pairing </term> .
other,3-3-N04-1024,bq This system identifies <term> features </term> of <term> sentences </term> based on <term> semantic similarity measures </term> and <term> discourse structure </term> .
other,6-4-N04-1024,bq A <term> support vector machine </term> uses these <term> features </term> to capture <term> breakdowns in coherence </term> due to relatedness to the <term> essay question </term> and relatedness between <term> discourse elements </term> .
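A minimal sketch of the setup this sentence describes: a support vector machine trained on relatedness features. The two feature columns and the labels below are assumptions for illustration only.

```python
# Hedged SVM sketch over toy "coherence" features; the column meanings
# (relatedness scores) are assumptions, not the system's actual features.
from sklearn.svm import SVC

# columns: [relatedness to essay question, relatedness to previous discourse element]
X = [[0.9, 0.8], [0.1, 0.2], [0.7, 0.6], [0.2, 0.1]]
y = [1, 0, 1, 0]   # 1 = coherent, 0 = breakdown (toy labels)

svm = SVC(kernel="rbf").fit(X, y)
print(svm.predict([[0.15, 0.3]]))   # low relatedness -> likely breakdown
```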
other,38-4-N04-4028,bq The <term> information extraction system </term> we evaluate is based on a <term> linear-chain conditional random field ( CRF ) </term> , a <term> probabilistic model </term> which has performed well on <term> information extraction tasks </term> because of its ability to capture arbitrary , overlapping <term> features </term> of the input in a <term> Markov model </term> .
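A linear-chain CRF consumes per-token feature dictionaries that may overlap arbitrarily, which is the property highlighted above. Below is a minimal sketch using the sklearn-crfsuite library (one possible implementation; the paper predates it), with toy training data.

```python
# Minimal linear-chain CRF sketch with sklearn-crfsuite (pip install
# sklearn-crfsuite). Per-token feature dicts may overlap arbitrarily,
# which is the property the sentence above highlights. Data is toy.
import sklearn_crfsuite

X_train = [[{"word": "Acme", "is_cap": True},
            {"word": "hired", "is_cap": False},
            {"word": "Jo", "is_cap": True}]]
y_train = [["ORG", "O", "PER"]]   # toy extraction labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```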
other,12-4-I05-5003,bq Our results show that <term> MT evaluation techniques </term> are able to produce useful <term> features </term> for <term> paraphrase classification </term> and to a lesser extent <term> entailment </term> .
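One way MT evaluation techniques yield paraphrase features is to score the two sentences against each other with a metric such as BLEU. The sketch below uses NLTK's sentence-level BLEU; the symmetric averaging and the metric choice are assumptions, not the paper's exact feature set.

```python
# Sketch of an MT-evaluation score (BLEU, via NLTK) used as one feature
# for paraphrase classification. Smoothing is needed for short sentences.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_feature(sent_a, sent_b):
    """Symmetric sentence-level BLEU between two tokenized sentences."""
    smooth = SmoothingFunction().method1
    ab = sentence_bleu([sent_a], sent_b, smoothing_function=smooth)
    ba = sentence_bleu([sent_b], sent_a, smoothing_function=smooth)
    return (ab + ba) / 2.0

a = "the cat sat on the mat".split()
b = "a cat was sitting on the mat".split()
print(bleu_feature(a, b))   # higher values suggest paraphrase-like overlap
```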
other,14-3-J05-1003,bq A second <term> model </term> then attempts to improve upon this initial <term> ranking </term> , using additional <term> features </term> of the <term> tree </term> as evidence .
other,19-4-J05-1003,bq The strength of our <term> approach </term> is that it allows a <term> tree </term> to be represented as an arbitrary set of <term> features </term> , without concerns about how these <term> features </term> interact or overlap and without the need to define a <term> derivation </term> or a <term> generative model </term> which takes these <term> features </term> into account .
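The representational point, that a tree can be treated as an arbitrary, possibly overlapping set of features scored without a generative model, can be sketched with a bare-bones linear reranker. The feature names and weights below are hypothetical, and the simple linear scorer stands in for, rather than reproduces, the paper's learning method.

```python
# Bare-bones linear reranker sketch: each candidate tree is just an arbitrary
# set of feature names (overlap is harmless), scored against a weight vector.
# Feature names and weights are hypothetical; this is not the paper's learner.
from collections import defaultdict

weights = defaultdict(float, {"rule:NP->DT_NN": 0.6, "depth>5": -0.3})

def score(feature_set):
    """Sum the weights of whatever features the tree happens to have."""
    return sum(weights[f] for f in feature_set)

candidates = [
    {"rule:NP->DT_NN", "depth>5"},   # candidate tree 1's feature set
    {"rule:NP->DT_NN"},              # candidate tree 2's feature set
]
best = max(candidates, key=score)    # rerank: keep the highest-scoring tree
print(best)
```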