- … In this presentation, we describe the features of and <term> requirements </term> for a genuinely … called a <term> semantic frame </term>. The key features of the <term> system </term> include: (i …
- (P01-1070) … which are built from <term> shallow linguistic features </term> of <term> questions </term>, are employed …
- (N03-1033) … (ii) broad use of <term> lexical features </term>, including <term> jointly conditioning …
- (N03-1033) … fine-grained modeling of <term> unknown word features </term>. Using these ideas together, the …
- (P03-1002) … based on: (1) an extended set of <term> features </term>; and (2) <term> inductive decision …
- (P03-1022) … non-NP-antecedents </term>. We present a set of <term> features </term> designed for <term> pronoun resolution …
- (P03-1022) … and determine the most promising <term> features </term>. We evaluate the <term> system </term> …
- (C04-1035) … create a set of <term> domain independent features </term> to annotate an input <term> dataset …
- (C04-1035) … approx 90%. The results show that the <term> features </term> in terms of which we formulate our …
- (C04-1035) … be learnt automatically from these <term> features </term>. We suggest a new goal and evaluation …
- (C04-1068) … In this paper, we identify <term> features </term> of <term> electronic discussions </term> …
- (C04-1116) … most of the words with similar <term> context features </term> in each author's <term> corpus </term> …
- (C04-1128) … summarization </term>. We show that various <term> features </term> based on the structure of <term> email-threads …
- (N04-1024) … essays </term>. This system identifies <term> features </term> of <term> sentences </term> based on <term> …
- (N04-1024) … support vector machine </term> uses these <term> features </term> to capture <term> breakdowns in coherence …
- (N04-4028) … to capture arbitrary, overlapping <term> features </term> of the input in a <term> Markov model …
- (I05-5003) … techniques </term> are able to produce useful <term> features </term> for <term> paraphrase classification …
- (J05-1003) … <term> ranking </term>, using additional <term> features </term> of the <term> tree </term> as evidence …
- (J05-1003) … represented as an arbitrary set of <term> features </term>, without concerns about how these …