|
Using only 40
<term>
utterances
</term>
from the
<term>
target speaker
</term>
for
<term>
adaptation
</term>
, the
<term>
error rate
</term>
dropped
to
4.1 % --- a 45 % reduction in
<term>
error
</term>
compared to the
<term>
SI
</term>
result .
|
#17202
Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1% --- a 45% reduction in error compared to the SI result. |
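A quick arithmetic note on this record, assuming "a 45% reduction in error" means a relative reduction in error rate: writing e_SI for the speaker-independent baseline error rate (notation introduced here, not in the record), the implied baseline is

\[ e_{\mathrm{SI}} = \frac{4.1\%}{1 - 0.45} \approx 7.5\% \]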
|
The
<term>
accuracy rate
</term>
of
<term>
syntactic disambiguation
</term>
is raised from 46.0 %
to
60.62 % by using this novel approach .
|
#17937
The accuracy rate of syntactic disambiguation is raised from 46.0% to 60.62% by using this novel approach. |
|
On a subset of the most difficult
<term>
SENSEVAL-2 nouns
</term>
, the
<term>
accuracy
</term>
difference between the two approaches is only 14.0 % , and the difference could narrow further
to
6.5 % if we disregard the advantage that
<term>
manually sense-tagged data
</term>
have in their
<term>
sense coverage
</term>
.
|
#4897
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
|
In
<term>
head-to-head tests
</term>
against one of the best existing robust
<term>
probabilistic parsing models
</term>
, which we call
<term>
P-CFG
</term>
, the
<term>
HBG model
</term>
significantly outperforms
<term>
P-CFG
</term>
, increasing the
<term>
parsing accuracy
</term>
rate from 60 %
to
75 % , a 37 % reduction in error .
|
#19041
In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error. |
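A quick arithmetic check on the "37% reduction in error" claim, taking error rate to be 100% minus parsing accuracy:

\[ \frac{(1 - 0.60) - (1 - 0.75)}{1 - 0.60} = \frac{0.15}{0.40} = 0.375 \approx 37\% \]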
|
<term>
Monolingual , unannotated text
</term>
can be used to further improve the
<term>
stemmer
</term>
by allowing it to adapt
to
a desired
<term>
domain
</term>
or
<term>
genre
</term>
.
|
#4503
Monolingual, unannotated text can be used to further improve the stemmer by allowing it to adapt to a desired domain or genre. |
|
We then turn
to
a discussion comparing the
<term>
linguistic expressiveness
</term>
of the two
<term>
formalisms
</term>
.
|
#14611
We then turn to a discussion comparing the linguistic expressiveness of the two formalisms. |
|
This paper presents an
<term>
automatic scheme
</term>
for collecting
<term>
statistics
</term>
on
<term>
co-occurrence patterns
</term>
in a large
<term>
corpus
</term>
.
To
a large extent , these
<term>
statistics
</term>
reflect
<term>
semantic constraints
</term>
and thus are used to disambiguate
<term>
anaphora references
</term>
and
<term>
syntactic ambiguities
</term>
.
|
#16633
This paper presents an automatic scheme for collecting statistics on co-occurrence patterns in a large corpus. To a large extent, these statistics reflect semantic constraints and thus are used to disambiguate anaphora references and syntactic ambiguities. |
|
Our results show that
<term>
MT evaluation techniques
</term>
are able to produce useful
<term>
features
</term>
for
<term>
paraphrase classification
</term>
and ,
to
a lesser extent ,
<term>
entailment
</term>
.
|
#8414
Our results show that MT evaluation techniques are able to produce useful features for paraphrase classification and, to a lesser extent, entailment. |
|
We go on to describe
<term>
FlexP
</term>
, a
<term>
bottom-up pattern-matching parser
</term>
that we have designed and implemented to provide these flexibilities for
<term>
restricted natural language
</term>
input
to
a limited-domain computer system .
|
#12783
We go on to describe FlexP, a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system. |
|
The request is passed
to
a
<term>
mobile , intelligent agent
</term>
for execution at the appropriate
<term>
database
</term>
.
|
#851
The request is passed to a mobile, intelligent agent for execution at the appropriate database. |
|
Our
<term>
logical definition
</term>
leads
to
a neat relation to
<term>
categorial grammar
</term>
( yielding a treatment of
<term>
Montague semantics
</term>
) , a
<term>
parsing-as-deduction
</term>
in a
<term>
resource sensitive logic
</term>
, and a
<term>
learning algorithm
</term>
from
<term>
structured data
</term>
( based on a
<term>
typing-algorithm
</term>
and
<term>
type-unification
</term>
) .
|
#1950
Our logical definition leads to a neat relation to categorial grammar (yielding a treatment of Montague semantics), a parsing-as-deduction in a resource sensitive logic, and a learning algorithm from structured data (based on a typing-algorithm and type-unification). |
|
<term>
FERRET
</term>
utilizes a novel approach to
<term>
Q/A
</term>
known as
<term>
predictive questioning
</term>
which attempts to identify the
<term>
questions
</term>
( and
<term>
answers
</term>
) that
<term>
users
</term>
need by analyzing how a
<term>
user
</term>
interacts with a system while gathering information related
to
a particular scenario .
|
#11691
FERRET utilizes a novel approach to Q/A known as predictive questioning which attempts to identify the questions (and answers) that users need by analyzing how a user interacts with a system while gathering information related to a particular scenario. |
|
We describe how this information is used in a
<term>
prototype system
</term>
designed to support
<term>
information workers
</term>
' access
to
a
<term>
pharmaceutical news archive
</term>
as part of their
<term>
industry watch
</term>
function .
|
#331
We describe how this information is used in a prototype system designed to support information workers' access to a pharmaceutical news archive as part of their industry watch function. |
|
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
<term>
model
</term>
is then passed
to
a
<term>
phone-string classifier
</term>
.
|
#2285
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
|
It is argued that the method reduces
<term>
metaphor interpretation
</term>
from a
<term>
reconstruction
</term>
to
a
<term>
recognition task
</term>
.
|
#12510
It is argued that the method reduces metaphor interpretation from a reconstruction to a recognition task. |
|
Experiments show that this approach is superior
to
a single
<term>
decision-tree classifier
</term>
.
|
#7063
Experiments show that this approach is superior to a single decision-tree classifier. |
|
Typically , information that makes it
to
a
<term>
summary
</term>
appears in many different
<term>
lexical-syntactic forms
</term>
in the input
<term>
documents
</term>
.
|
#7182
Typically, information that makes it to a summary appears in many different lexical-syntactic forms in the input documents. |
|
<term>
Chat-80
</term>
has been designed to be both efficient and easily adaptable
to
a variety of applications .
|
#12865
Chat-80 has been designed to be both efficient and easily adaptable to a variety of applications. |
|
If a
<term>
computer system
</term>
wishes
to
accept
<term>
natural language input
</term>
from its
<term>
users
</term>
on a routine basis , it must display a similar indifference .
|
#12718
If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. |
|
A standard
<term>
ATN
</term>
should be further developed in order
to
account for the
<term>
verbal interactions
</term>
of
<term>
task-oriented dialogs
</term>
.
|
#12430
A standard ATN should be further developed in order to account for the verbal interactions of task-oriented dialogs. |