<term> hidden Markov models ( HMM ) </term> , which uses a large amount of <term> speech </term>
Document Understanding System ) </term> , which creates the data for a <term> text retrieval
traditional <term> statistical approaches </term> , which resolve <term> ambiguities </term> by indirectly
<term> concordancer </term> , <term> CARE </term> , which exploits the <term> move-tagged abstracts
<term> LRE project SmTA double check </term> , which is creating a <term> PC based tool </term>
with a single <term> Intel i860 chip </term> , which provides a factor of 5 speed-up over a <term>
English-Chinese parallel corpora </term> , which are then used for disambiguating the <term>
a calculus of <term> equivalences </term> , which can be used to simplify <term> formulas </term>
judge three types of the <term> errors </term> , which are characters wrongly substituted , deleted
an impediment to progress in the field , which we address with this work . Experiments
propose a <term> logical formalism </term> , which , among other things , is suitable for
argumentation system </term> by Konolige , which is a <term> formalization </term> of <term> defeasible
probabilistic model of natural language </term> , which we call <term> HBG </term> , that takes advantage
</term> called <term> alternative markers </term> , which includes <term> other ( than ) </term> , <term>
coordinate structure analysis model </term> , which provides <term> top-down scope information
<term> probabilistic parsing models </term> , which we call <term> P-CFG </term> , the <term> HBG
comparison with previous <term> models </term> , which either use arbitrary <term> windows </term>
WH-questions </term> . These <term> models </term> , which are built from <term> shallow linguistic
program </term> called <term> NOMAD </term> , which understands <term> scruffy texts </term> in
selection function </term> is presented , which yields superior <term> feature vectors </term>