. In this presentation , we describe the features of and <term> requirements </term> for a genuinely
called a <term> semantic frame </term> . The key features of the <term> system </term> include : ( i
other,7-2-P01-1070,bq which are built from <term> shallow linguistic features </term> of <term> questions </term> , are employed
other,36-1-N03-1033,bq </term> , ( ii ) broad use of <term> lexical features </term> , including <term> jointly conditioning
other,12-3-P03-1002,bq based on : ( 1 ) an extended set of <term> features </term> ; and ( 2 ) <term> inductive decision
other,5-3-P03-1022,bq non-NP-antecedents </term> . We present a set of <term> features </term> designed for <term> pronoun resolution
other,13-3-C04-1035,bq create a set of <term> domain independent features </term> to annotate an input <term> dataset
other,6-3-C04-1068,bq </term> . In this paper , we identify <term> features </term> of <term> electronic discussions </term>
other,11-4-C04-1116,bq most of the words with similar <term> context features </term> in each author 's <term> corpus </term>
other,4-3-C04-1128,bq summarization </term> . We show that various <term> features </term> based on the structure of <term> email-threads
other,3-3-N04-1024,bq essays </term> . This system identifies <term> features </term> of <term> sentences </term> based on <term>
other,38-4-N04-4028,bq to capture arbitrary , overlapping <term> features </term> of the input in a <term> Markov model
other,12-4-I05-5003,bq techniques </term> are able to produce useful <term> features </term> for <term> paraphrase classification
other,14-3-J05-1003,bq <term> ranking </term> , using additional <term> features </term> of the <term> tree </term> as evidence
other,15-3-P05-1069,bq model </term> which uses <term> real-valued features </term> ( e.g. a <term> language model score
other,11-5-E06-1018,bq <term> sentence co-occurrences </term> as <term> features </term> allows for accurate results . Additionally
other,21-2-E06-1022,bq utterance </term> and <term> conversational context features </term> . Then , we explore whether information
other,5-5-E06-1035,bq </term> . Examination of the effect of <term> features </term> shows that <term> predicting top-level
other,18-1-P06-2012,bq use of various <term> lexical and syntactic features </term> from the <term> contexts </term> . It
conversation transcripts </term> etc. , have features that differ significantly from <term> neat