N03-1033: … fine-grained modeling of <term> unknown word features </term> . Using these ideas together , the …
E06-1022: … utterance </term> and <term> conversational context features </term> . Then , we explore whether information …
C04-1068: … </term> . In this paper , we identify <term> features </term> of <term> electronic discussions </term> …
N04-1024: … support vector machine </term> uses these <term> features </term> to capture <term> breakdowns in coherence …
P05-1069: … model score </term> ) as well as <term> binary features </term> based on the <term> block </term> identities …
P86-1038: … </term> use structures containing sets of <term> features </term> to describe <term> linguistic objects …
I05-5003: … techniques </term> are able to produce useful <term> features </term> for <term> paraphrase classification …
… </term> to fit . One of the distinguishing features of a more <term> linguistically sophisticated …
P05-1069: … </term> can easily handle millions of <term> features </term> . The best system obtains a 18.6 …
E06-1035: … <term> lexical-cohesion and conversational features </term> performs best , and ( 3 ) <term> conversational …
P06-2012: … use of various <term> lexical and syntactic features </term> from the <term> contexts </term> . It …
J05-1003: … evidence from an additional 500,000 <term> features </term> over <term> parse trees </term> that …
J05-1003: … represented as an arbitrary set of <term> features </term> , without concerns about how these …
P01-1070: … which are built from <term> shallow linguistic features </term> of <term> questions </term> , are employed …
… . In this presentation , we describe the features of and <term> requirements </term> for a genuinely …
P05-1069: … model </term> which uses <term> real-valued features </term> ( e.g. a <term> language model score …
… identities themselves , e.g. block bigram features . Our <term> training algorithm </term> can …
J05-1003: … <term> ranking </term> , using additional <term> features </term> of the <term> tree </term> as evidence …
E06-1035: … <term> lexical-cohesion and conversational features </term> , but do not change the general preference …
C04-1035: … create a set of <term> domain independent features </term> to annotate an input <term> dataset …