With only 40 <term> utterances </term> from the <term> target speaker </term> used for <term> adaptation </term> , the <term> error rate </term> dropped to 4.1 % , a 45 % reduction in <term> error </term> compared to the <term> SI </term> result .
The <term> accuracy rate </term> of <term> syntactic disambiguation </term> is raised from 46.0 % to 60.62 % by using this novel approach .
On a subset of the most difficult <term> SENSEVAL-2 nouns </term> , the <term> accuracy </term> difference between the two approaches is only 14.0 % , and the difference could narrow further to 6.5 % if we disregard the advantage that <term> manually sense-tagged data </term> have in their <term> sense coverage </term> .
In <term> head-to-head tests </term> against one of the best existing robust <term> probabilistic parsing models </term> , which we call <term> P-CFG </term> , the <term> HBG model </term> significantly outperforms <term> P-CFG </term> , increasing the <term> parsing accuracy </term> rate from 60 % to 75 % , a 37 % reduction in error .
<term> Monolingual , unannotated text </term> can be used to further improve the <term> stemmer </term> by allowing it to adapt to a desired <term> domain </term> or <term> genre </term> .
We then turn to a discussion comparing the <term> linguistic expressiveness </term> of the two <term> formalisms </term> .
This paper presents an <term> automatic scheme </term> for collecting <term> statistics </term> on <term> co-occurrence patterns </term> in a large <term> corpus </term> . To a large extent , these <term> statistics </term> reflect <term> semantic constraints </term> and thus are used to disambiguate <term> anaphora references </term> and <term> syntactic ambiguities </term> .
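The sentence does not spell out the collection scheme itself; as a rough illustration only, the Python sketch below counts window-based co-occurrences in a toy corpus. The function and parameter names ( count_cooccurrences , window ) are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch (not the paper's actual scheme): collect window-based
# co-occurrence counts from a tokenized corpus. Function and parameter
# names are illustrative assumptions.
from collections import Counter

def count_cooccurrences(sentences, window=3):
    """Count how often two tokens occur within `window` positions of each other."""
    counts = Counter()
    for tokens in sentences:
        for i, left in enumerate(tokens):
            for right in tokens[i + 1 : i + 1 + window]:
                counts[(left, right)] += 1
                counts[(right, left)] += 1
    return counts

# Toy corpus; a real run would use a large collection of tokenized sentences.
corpus = [
    ["the", "court", "denied", "the", "motion"],
    ["the", "judge", "denied", "the", "appeal"],
]
stats = count_cooccurrences(corpus)
# Pairs with high counts (e.g. a verb and its typical object) act as rough
# semantic-constraint evidence when scoring competing attachments or antecedents.
print(stats[("denied", "motion")], stats[("denied", "appeal")])
```

In this spirit, the accumulated pair counts supply the preference information that can be consulted when choosing between competing antecedents or attachments.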
Our results show that <term> MT evaluation techniques </term> are able to produce useful <term> features </term> for <term> paraphrase classification </term> and , to a lesser extent , <term> entailment </term> .
We go on to describe <term> FlexP </term> , a <term> bottom-up pattern-matching parser </term> that we have designed and implemented to provide these flexibilities for <term> restricted natural language </term> input to a limited-domain computer system .
The request is passed to a <term> mobile , intelligent agent </term> for execution at the appropriate <term> database </term> .
Our <term> logical definition </term> leads to a neat relation to <term> categorial grammar </term> ( yielding a treatment of <term> Montague semantics </term> ) , a <term> parsing-as-deduction </term> in a <term> resource sensitive logic </term> , and a <term> learning algorithm </term> from <term> structured data </term> ( based on a <term> typing-algorithm </term> and <term> type-unification </term> ) .
<term> FERRET </term> utilizes a novel approach to <term> Q/A </term> known as <term> predictive questioning </term> which attempts to identify the <term> questions </term> ( and <term> answers </term> ) that <term> users </term> need by analyzing how a <term> user </term> interacts with a system while gathering information related to a particular scenario .
We describe how this information is used in a <term> prototype system </term> designed to support <term> information workers </term> ' access to a <term> pharmaceutical news archive </term> as part of their <term> industry watch </term> function .
In our method , <term> unsupervised training </term> is first used to train a <term> phone n-gram model </term> for a particular <term> domain </term> ; the <term> output </term> of <term> recognition </term> with this <term> model </term> is then passed to a <term> phone-string classifier </term> .
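The sentence outlines a two-stage pipeline without implementation details; the sketch below illustrates only the second stage under stated assumptions, treating recognized phone strings as phone n-gram count features for a generic scikit-learn classifier. The data, domain labels, and names are hypothetical and not drawn from the paper.

```python
# Sketch of the second stage only, under stated assumptions: recognized phone
# strings are represented by phone n-gram counts and passed to a generic
# classifier. Data, labels, and names below are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical recognizer output: space-separated phone symbols per utterance.
phone_strings = [
    "k ao l ax b ae ng k",
    "ch eh k m ay b ae l ax ns",
    "b uh k ax f l ay t",
    "r iy z er v ax s iy t",
]
labels = ["banking", "banking", "travel", "travel"]

# Phone unigrams and bigrams over the space-separated symbols.
classifier = make_pipeline(
    CountVectorizer(analyzer="word", token_pattern=r"\S+", ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(phone_strings, labels)

# Likely predicts 'travel' here, given the shared phone n-grams with the
# travel utterances; with four toy examples this is only illustrative.
print(classifier.predict(["b uh k ax s iy t"]))
```

The design point being illustrated is that the classifier never needs word-level transcripts: it operates directly on the phone strings the recognizer emits.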
It is argued that the method reduces <term> metaphor interpretation </term> from a <term> reconstruction </term> to a <term> recognition task </term> .
Experiments show that this approach is superior to a single <term> decision-tree classifier </term> .
Typically , information that makes it to a <term> summary </term> appears in many different <term> lexical-syntactic forms </term> in the input <term> documents </term> .
<term> Chat-80 </term> has been designed to be both efficient and easily adaptable to a variety of applications .
If a <term> computer system </term> wishes to accept <term> natural language input </term> from its <term> users </term> on a routine basis , it must display a similar indifference .
A standard <term> ATN </term> should be further developed in order to account for the <term> verbal interactions </term> of <term> task-oriented dialogs </term> .