other,19-6-E06-1035,bq <term> lexical-cohesion and conversational features </term> , but do not change the general preference
other,36-1-N03-1033,bq </term> , ( ii ) broad use of <term> lexical features </term> , including <term> jointly conditioning
other,19-4-J05-1003,bq represented as an arbitrary set of <term> features </term> , without concerns about how these
other,12-3-P03-1002,bq based on : ( 1 ) an extended set of <term> features </term> ; and ( 2 ) <term> inductive decision
identities themselves , e.g. block bigram features . Our <term> training algorithm </term> can
other,8-4-P05-1069,bq </term> can easily handle millions of <term> features </term> . The best system obtains a 18.6
other,21-2-E06-1022,bq utterance </term> and <term> conversational context features </term> . Then , we explore whether information
other,66-1-N03-1033,bq fine-grained modeling of <term> unknown word features </term> . Using these ideas together , the
other,18-3-P03-1022,bq </term> and determine the most promising <term> features </term> . We evaluate the <term> system </term>
other,35-5-C04-1035,bq be learnt automatically from these <term> features </term> . We suggest a new goal and evaluation
other,15-3-P05-1069,bq model </term> which uses <term> real-valued features </term> ( e.g. a <term> language model score
other,11-5-E06-1018,bq <term> sentence co-occurrences </term> as <term> features </term> allows for accurate results . Additionally
other,9-4-E06-1022,bq conversational context </term> and <term> utterance features </term> are combined with <term> speaker 's
other,4-3-C04-1128,bq summarization </term> . We show that various <term> features </term> based on the structure of <term> email-threads
other,27-3-P05-1069,bq model score </term> ) as well as <term> binary features </term> based on the <term> block </term> identities
other,5-3-P03-1022,bq non-NP-antecedents </term> . We present a set of <term> features </term> designed for <term> pronoun resolution
other,12-4-I05-5003,bq techniques </term> are able to produce useful <term> features </term> for <term> paraphrase classification
other,18-1-P06-2012,bq use of various <term> lexical and syntactic features </term> from the <term> contexts </term> . It
other,11-4-C04-1116,bq most of the words with similar <term> context features </term> in each author 's <term> corpus </term>
other,5-5-C04-1035,bq approx 90 % . The results show that the <term> features </term> in terms of which we formulate our