|
creating a
<term>
PC based tool
</term>
to be used
|
in
|
the
<term>
technical abstracting industry
|
#20187
This paper reports on work done for the LRE project SmTA double check, which is creating a PC based tool to be used in the technical abstracting industry. |
|
correcting
<term>
Japanese homophone errors
</term>
|
in
|
<term>
compound nouns
</term>
. This method
|
#20416
This paper proposes a method for detecting and correcting Japanese homophone errors in compound nouns. |
|
detect
<term>
Japanese homophone errors
</term>
|
in
|
<term>
compound nouns
</term>
, but also can
|
#20429
This method can not only detect Japanese homophone errors in compound nouns, but can also find the correct candidates for the detected errors automatically. |
|
determines that a
<term>
homophone
</term>
is misused
|
in
|
a
<term>
compound noun
</term>
if one or both
|
#20493
The method accurately determines that a homophone is misused in a compound noun if one or both of its neighbors is not a member of the semantic set defined by the homophone. |
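The neighbor-membership test described in #20493 can be sketched as follows. This is a toy illustration only: the romanized keys, semantic sets, and helper names are invented for the example, not taken from the paper.

```python
# Hypothetical sketch of the neighbor-based check: a homophone is flagged as
# misused in a compound noun when one or both of its neighbors fall outside
# the semantic set defined by that homophone. Sets below are invented.

SEMANTIC_SETS = {
    "kouen_park": {"park", "public", "green"},     # e.g. the "park" sense
    "kouen_talk": {"lecture", "talk", "hall"},     # e.g. the "lecture" sense
}

def is_misused(homophone: str, left: str, right: str) -> bool:
    """True if the homophone looks misused in its compound."""
    allowed = SEMANTIC_SETS.get(homophone, set())
    neighbors = [n for n in (left, right) if n]
    # misuse: one or both neighbors are not members of the semantic set
    return any(n not in allowed for n in neighbors)

print(is_misused("kouen_park", "public", "green"))    # both fit -> False
print(is_misused("kouen_park", "lecture", "green"))   # one outside -> True
```

A real system would derive the semantic sets from a thesaurus or corpus statistics rather than hand-listing them.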
|
wrongly substituted , deleted or inserted
|
in
|
a
<term>
Japanese bunsetsu
</term>
and an
<term>
|
#20660
In order to judge three types of errors, which are characters wrongly substituted, deleted or inserted in a Japanese bunsetsu and an English word, and to correct these errors, this paper proposes new methods using an m-th order Markov chain model for Japanese kanji-kana characters and the English alphabet, assuming that the Markov probability of a correct chain of syllables or kanji-kana characters is greater than that of erroneous chains. |
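The Markov-chain criterion in #20660, namely that a correct character chain has higher probability than an erroneous one, can be sketched with character n-gram counts. This is a minimal illustration under assumed smoothing, not the paper's implementation; the toy corpus and parameter values are invented.

```python
import math
from collections import defaultdict

def train(corpus, m=2):
    """Collect m-th order context and (m+1)-gram counts over characters."""
    ctx, full = defaultdict(int), defaultdict(int)
    for word in corpus:
        padded = "^" * m + word + "$"
        for i in range(m, len(padded)):
            full[padded[i - m:i + 1]] += 1
            ctx[padded[i - m:i]] += 1
    return ctx, full

def log_prob(word, ctx, full, m=2, alpha=1.0, vocab=64):
    """Add-alpha smoothed log probability of the character chain."""
    padded = "^" * m + word + "$"
    lp = 0.0
    for i in range(m, len(padded)):
        num = full[padded[i - m:i + 1]] + alpha
        den = ctx[padded[i - m:i]] + alpha * vocab
        lp += math.log(num / den)
    return lp

corpus = ["parsing", "parser", "parse", "parses"]
ctx, full = train(corpus)
# a chain seen in training scores higher than a corrupted (substituted) one
assert log_prob("parse", ctx, full) > log_prob("pxrse", ctx, full)
```

Detection then amounts to flagging a chain whose probability falls below that of a candidate correction, which is the comparison the assertion exercises.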
|
detecting as well as correcting these errors
|
in
|
<term>
Japanese bunsetsu
</term>
and
<term>
English
|
#20738
From the results of the experiments, it is concluded that the methods are useful for detecting as well as correcting these errors in Japanese bunsetsu and English words. |
|
on
<term>
typed feature structures
</term>
,
|
in
|
order to support linking of
<term>
lexical
|
#20763
This paper describes the enhancements made, within a unification framework, based on typed feature structures, in order to support linking of lexical entries to their translation equivalents. |
|
semantic domains
</term>
, have been developed
|
in
|
order to generate
<term>
lexical cross-relations
|
#20802
Several experiments, corresponding to rather closed semantic domains, have been developed in order to generate lexical cross-relations between English and Spanish. |
|
instantiation
</term>
of
<term>
clauses
</term>
|
in
|
a program roughly corresponding to
<term>
|
#20905
It is also a drastic generalization of chart parsing, partial instantiation of clauses in a program roughly corresponding to arcs in a chart. |
|
roughly corresponding to
<term>
arcs
</term>
|
in
|
a
<term>
chart
</term>
.
<term>
Chart-like parsing
|
#20912
It is also a drastic generalization of chart parsing, partial instantiation of clauses in a program roughly corresponding to arcs in a chart. |
|
The
<term>
parser
</term>
has been implemented
|
in
|
<term>
C++
</term>
and runs on
<term>
SUN Sparcstations
|
#20963
The parser has been implemented in C++ and runs on SUN Sparcstations with X-windows. |
|
corresponding
<term>
parser
</term>
can be specified
|
in
|
terms of
<term>
event type networks
</term>
|
#21085
In particular, we here elaborate on principles of how the global behavior of a lexically distributed grammar and its corresponding parser can be specified in terms of event type networks and event networks, respectively. |
|
adapted
<term>
parsing strategies
</term>
,
|
in
|
a similar fashion to the
<term>
SYSCONJ system
|
#21155
Despite the large amount of theoretical work done on non-constituent coordination during the last two decades, many computational systems still treat coordination using adapted parsing strategies, in a similar fashion to the SYSCONJ system developed for ATNs. |
|
be described formally and declaratively
|
in
|
terms of
<term>
Dynamic Grammars
</term>
.
|
#21205
Finally, it shows how processing accounts can be described formally and declaratively in terms of Dynamic Grammars. |
|
capture
<term>
long distance constraints
</term>
|
in
|
a
<term>
sentence
</term>
or
<term>
paragraph
|
#21226
This paper introduces a simple mixture language model that attempts to capture long distance constraints in a sentence or paragraph. |
|
</term>
, experiments show a 7 % improvement
|
in
|
<term>
recognition accuracy
</term>
with the
|
#21275
Using the BU recognition system, experiments show a 7% improvement in recognition accuracy with the mixture trigram models as compared to using a trigram model. |
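The mixture trigram model compared against a single trigram model in #21275 can be sketched as a weighted interpolation of component trigram distributions. The component tables, probabilities, and mixture weights below are invented for illustration; they are not the BU system's values.

```python
# Toy sketch of a sentence-level mixture of trigram models: the mixture
# probability of a word is the weighted sum of each component's probability.

def mixture_prob(word, history, components, weights, floor=1e-6):
    """P(word | history) under a mixture of trigram components."""
    return sum(w * comp.get((history, word), floor)
               for comp, w in zip(components, weights))

# two hypothetical topic-specific trigram components
topic_a = {(("of", "the"), "parser"): 0.02}
topic_b = {(("of", "the"), "parser"): 0.20}

p = mixture_prob("parser", ("of", "the"), [topic_a, topic_b], [0.5, 0.5])
print(p)  # 0.5 * 0.02 + 0.5 * 0.20 = 0.11
```

In the model class described, the mixture weights would themselves be re-estimated per sentence or paragraph, which is how the long-distance (topic) constraint enters.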
|
word matchings
</term>
, are also factored
|
in
|
by modifying the
<term>
transition probabilities
|
#21353
Other contextual clues, such as editing terms, word fragments, and word matchings, are also factored in by modifying the transition probabilities. |
|
understanding technology
</term>
is implemented
|
in
|
a system called
<term>
IDUS ( Intelligent
|
#21394
Our document understanding technology is implemented in a system called IDUS (Intelligent Document Understanding System), which creates the data for a text retrieval application and the automatic generation of hypertext links. |