|
</term>
---
<term>
reference problems
</term>
---
|
by
|
describing a case study and techniques
|
#14523
This paper highlights a particular class of miscommunication --- reference problems --- by describing a case study and techniques for avoiding failures of reference. |
|
the ideas of Pereira and Shieber [ 11 ] ,
|
by
|
providing an interpretation for values
|
#14733
This semantics for feature structures extends the ideas of Pereira and Shieber [11], by providing an interpretation for values which are specified by disjunctions and path values embedded within disjunctions. |
|
enhances
<term>
Criterion
</term>
's capability ,
|
by
|
evaluating multiple aspects of
<term>
coherence
|
#6681
We describe a new system that enhances Criterion's capability, by evaluating multiple aspects of coherence in essays. |
|
increasingly common speculative claim ,
|
by
|
evaluating a representative
<term>
Chinese-to-English
|
#7800
We present the first known empirical test of an increasingly common speculative claim, by evaluating a representative Chinese-to-English SMT model directly on word sense disambiguation performance, using standard WSD evaluation methodology and datasets from the Senseval-3 Chinese lexical sample task. |
|
principle-and-parameters language framework
</term>
. First ,
|
by
|
investigating the combinatorics of
<term>
|
#17363
First, by investigating the combinatorics of free indexation, we show that the problem of enumerating all possible indexings requires exponential time. |
|
fragments
</term>
are reached . At that point ,
|
by
|
virtue of the fact that both a left and
|
#15609
At that point, by virtue of the fact that both a left and a right context were found, heuristics can be introduced that predict the nature of the missing fragments. |
|
disambiguation
</term>
is raised from 46.0 % to 60.62 %
|
by
|
using this novel approach .
<term>
Graph
|
#17940
The accuracy rate of syntactic disambiguation is raised from 46.0% to 60.62% by using this novel approach. |
|
<term>
accuracy
</term>
that can be achieved
|
by
|
the
<term>
algorithms
</term>
, we present
|
#5573
Observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms, we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable. |
|
reduction in the search space
</term>
is achieved
|
by
|
using
<term>
semantic
</term>
rather than
<term>
|
#17712
A further reduction in the search space is achieved by using semantic rather than syntactic categories on the terminal and non-terminal edges, thereby reducing the amount of ambiguity and thus the number of edges, since only edges with a valid semantic interpretation are ever introduced. |
|
</term>
, which resolve
<term>
ambiguities
</term>
|
by
|
indirectly and implicitly using
<term>
maximum
|
#17845
Owing to the problem of insufficient training data and approximation error introduced by the language model, traditional statistical approaches, which resolve ambiguities by indirectly and implicitly using maximum likelihood method, fail to achieve high performance in real applications.
|
placement of
<term>
function words
</term>
, and
|
by
|
<term>
heuristic rules
</term>
that permit
|
#17684
This is facilitated through the use of phrase boundary heuristics based on the placement of function words, and by heuristic rules that permit certain kinds of phrases to be deduced despite the presence of unknown words. |
|
entries
</term>
, constructed automatically
|
by
|
applying a set of
<term>
extraction and conversion
|
#20984
It contains a lexicon with over 90,000 entries, constructed automatically by applying a set of extraction and conversion rules to entries from machine readable dictionaries. |
|
from the
<term>
homophone errors
</term>
caused
|
by
|
the
<term>
KANA-KANJI conversion
</term>
needed
|
#20375
Japanese texts frequently suffer from the homophone errors caused by the KANA-KANJI conversion needed to input the text. |
|
valuable task . We investigate that claim
|
by
|
adopting a simple
<term>
MT-based paraphrasing
|
#10766
We investigate that claim by adopting a simple MT-based paraphrasing technique and evaluating QA system performance on paraphrased questions. |
|
real
<term>
dialogue data
</term>
collected
|
by
|
the
<term>
system
</term>
. We obtained reasonable
|
#4367
Moreover, the models are automatically derived by decision tree learning using real dialogue data collected by the system. |
|
the
<term>
target speaker
</term>
and combined
|
by
|
<term>
averaging
</term>
. Using only 40
<term>
|
#17184
Each reference model is transformed to the space of the target speaker and combined by averaging. |
|
applied to the
<term>
USRs
</term>
computed
|
by
|
<term>
large-scale grammars
</term>
. We evaluate
|
#11173
The algorithm operates on underspecified chart representations which are derived from dominance graphs; it can be applied to the USRs computed by large-scale grammars. |
|
tasks are also taken into consideration
|
by
|
enlarging the
<term>
separation margin
</term>
|
#17903
To make the proposed algorithm robust, the possible variations between the training corpus and the real tasks are also taken into consideration by enlarging the separation margin between the correct candidate and its competing members. |
|
identify important
<term>
contents
</term>
|
by
|
frequency of
<term>
events
</term>
. With relevant
|
#11599
With independent approach, we identify important contents by frequency of events.
|
approach , we identify important contents
|
by
|
<term>
PageRank algorithm
</term>
on the
<term>
|
#11612
With relevant approach, we identify important contents by PageRank algorithm on the event map constructed from documents. |