| a definition of <term> typicality </term> -- | a | concept which plays an important role in |
#13726 As a simple application of the techniques described in this paper, we formulate a definition of typicality -- a concept which plays an important role in human cognition and is of relevance to default reasoning.

| <term> error rate </term> dropped to 4.1 % --- | a | 45 % reduction in <term> error </term> compared |
#17206 Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1% --- a 45% reduction in error compared to the SI result.

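A quick consistency check of the figures quoted in #17206 (the baseline value is not stated in the sentence itself): writing E_SI for the speaker-independent error rate that serves as the baseline,

\[ \frac{E_{\mathrm{SI}} - 4.1\%}{E_{\mathrm{SI}}} = 0.45 \;\Rightarrow\; E_{\mathrm{SI}} = \frac{4.1\%}{0.55} \approx 7.5\%, \]

so the quoted 45% relative reduction implies an SI error rate of roughly 7.5%.
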
| conditional random field ( CRF ) </term> , | a | <term> probabilistic model </term> which has |
#6829 The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model.

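For reference, a linear-chain CRF defines the conditional probability of a label sequence y = (y_1, ..., y_T) given an input sequence x in the standard way (the particular feature set used by the system in #6829 is not specified in this sentence):

\[ p(y \mid x) = \frac{1}{Z(x)} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, x, t) \Big), \qquad Z(x) = \sum_{y'} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(y'_{t-1}, y'_t, x, t) \Big). \]

Because each feature f_k may inspect the entire input x at every position t, arbitrary and overlapping features of the input can be used while the Markov assumption is made only over the label sequence.
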
| treatment of <term> Montague semantics </term> ) , | a | <term> parsing-as-deduction </term> in a <term> |
#1967 Our logical definition leads to a neat relation to categorial grammar (yielding a treatment of Montague semantics), a parsing-as-deduction in a resource sensitive logic, and a learning algorithm from structured data (based on a typing-algorithm and type-unification).

| the <term> linguistic structure </term> ) , | a | structure of <term> purposes </term> ( called |
#14131 In this theory, discourse structure is composed of three separate but interrelated components: the structure of the sequence of utterances (called the linguistic structure), a structure of purposes (called the intentional structure), and the state of focus of attention (called the attentional state).

| accuracy </term> rate from 60 % to 75 % , | a | 37 % reduction in error . We discuss <term> |
#19045 In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error.

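The "37% reduction in error" is a relative reduction over the complementary error rates; checking the quoted figures:

\[ \frac{(100\% - 60\%) - (100\% - 75\%)}{100\% - 60\%} = \frac{40\% - 25\%}{40\%} = 37.5\% \approx 37\%. \]
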
| allows for accurate results . Additionally , | a | novel and likewise automatic and <term> unsupervised |
#10185 Additionally, a novel and likewise automatic and unsupervised evaluation method inspired by Schutze's (1992) idea of evaluation of word sense disambiguation algorithms is employed.

| planning and building <term> COMPLEX </term> , | a | <term> computational lexicon </term> designed |
#15969 This paper presents our experience in planning and building COMPLEX, a computational lexicon designed to be a repository of shared lexical information for use by Natural Language Processing (NLP) systems.

| <term> bilingual parallel corpora </term> , | a | much more commonly available <term> resource |
#9680 We show that this task can be done using bilingual parallel corpora, a much more commonly available resource.

| length-based or translation-based criterion </term> , | a | <term> part-of-speech-based criterion </term> |
#20550 Rather than using a length-based or translation-based criterion, a part-of-speech-based criterion is proposed.

| community </term> . Over the last decade , | a | variety of <term> SMT algorithms </term> have |
#9961 Over the last decade, a variety of SMT algorithms have been built and empirically tested, whereas little is known about the computational complexity of some of the fundamental problems of SMT.

| that it contains . We give two estimates , | a | lower one and a higher one . As an <term> |
#5935 We give two estimates, a lower one and a higher one.

| feature vector </term> quality . Finally , | a | novel <term> feature weighting and selection |
#5358 Finally, a novel feature weighting and selection function is presented, which yields superior feature vectors and better word similarity performance.

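The weighting and selection function itself is not given in this sentence; purely as a generic illustration (not the function presented in #5358), distributional word-similarity work often weights co-occurrence features with positive pointwise mutual information and compares the resulting vectors with cosine similarity:

\[ \mathrm{PPMI}(w, c) = \max\!\Big(0,\; \log \frac{P(w, c)}{P(w)\,P(c)}\Big), \qquad \mathrm{sim}(w_1, w_2) = \frac{\vec{w}_1 \cdot \vec{w}_2}{\lVert \vec{w}_1 \rVert\, \lVert \vec{w}_2 \rVert}. \]

Feature selection then typically keeps only the highest-weighted contexts per word before similarities are computed.
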
| task into two distinct phases . First , | a | very simple , <term> randomized sentence-plan-generator |
#1376 First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input.

| We go on to describe <term> FlexP </term> , | a | <term> bottom-up pattern-matching parser </term> |
#12764 We go on to describe FlexP, a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.

| </term> achieved 89.75 % <term> F-measure </term> , | a | 13 % relative decrease in <term> F-measure |
#8843 The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%.

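Here "F-measure error" is 100% minus the F-measure, so the 13% figure is a relative error reduction; checking the quoted scores:

\[ \frac{(100\% - 88.2\%) - (100\% - 89.75\%)}{100\% - 88.2\%} = \frac{11.8\% - 10.25\%}{11.8\%} \approx 13.1\% \approx 13\%. \]
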
| </term> is presented . Under this framework , | a | <term> joint source-channel transliteration |
#5777 Under this framework, a joint source-channel transliteration model, also called n-gram transliteration model (ngram TM), is further proposed to model the transliteration process.

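As a sketch of what a joint source-channel n-gram model of this kind typically looks like (the exact formulation is not given in this sentence): the source word E and its transliteration F are segmented into K aligned transliteration units <e_i, f_i>, and their joint probability is modelled with an n-gram over those pairs, e.g. in the bigram case

\[ P(E, F) \approx \prod_{i=1}^{K} P\big(\langle e_i, f_i \rangle \mid \langle e_{i-1}, f_{i-1} \rangle\big), \]

so that source and target are generated jointly rather than through separate source and channel models.
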
| million ( for <term> Czech </term> ) . Further , | a | special method has been developed for easy |
#16854 Further, a special method has been developed for easy word classification.

| describing the layout of an apartment or house , | a | much-studied <term> discourse task </term> |
#15453 We have developed a computational model of the process of describing the layout of an apartment or house, a much-studied discourse task first characterized linguistically by Linde (1974).

| other <term> edited texts </term> . However , | a | great deal of <term> natural language texts |
#12976 However, a great deal of natural language texts, e.g., memos, rough drafts, conversation transcripts, etc., have features that differ significantly from neat texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic construction, missing periods, etc.