segments of actual tape-recorded descriptions , using <term> organizational and discourse strategies
sense disambiguation performance </term> , using standard <term> WSD evaluation methodology
improve upon this initial <term> ranking </term> , using additional <term> features </term> of the <term>
our best condition for this test suite , using 109 <term> training speakers </term> . Second
a <term> token classification task </term> , using various <term> tagging strategies </term> to
for <term> speaker adaptation ( SA ) </term> using the new <term> SI corpus </term> and a small
surprisingly close to what can be achieved using conventional <term> word-trigram recognition
for <term> Japanese sentence analyses </term> using an <term> argumentation system </term> by Konolige
<term> Communicator </term> participants are using . In this presentation , we describe the
describe the methods and hardware that we are using to produce a real-time demonstration of
proprietary <term> Arabic stemmer </term> built using <term> rules </term> , <term> affix lists </term>
the sum of each <term> character </term> . By using commands or <term> rules </term> which are
</term> and <term> linguistic pattern </term> . By using them , we can automatically extract such
</term> is raised from 46.0 % to 60.62 % by using this novel approach . <term> Graph unification
in the search space </term> is achieved by using <term> semantic </term> rather than <term> syntactic
performance gains from the <term> data </term> by using <term> class-dependent interpolation </term>
</term> which is parsed very efficiently by using the <term> parse record </term> of the first
sense per collocation observation </term> by using triplets of <term> words </term> instead of
<term> word string </term> has been obtained by using a different <term> LM </term> . Actually ,
Sentence ambiguities </term> can be resolved by using domain targeted preference knowledge without