tech,11-2-H01-1041,bq |
consists of two
<term>
core modules
</term>
,
<term>
|
language
|
understanding and generation modules
</term>
|
#422
The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame. |
other,4-5-C88-2162,bq |
acquisition
</term>
. From this , a
<term>
|
language
|
learning model
</term>
was implemented in
|
#15814
From this, a language learning model was implemented in the program RINA, which enhances its own lexical hierarchy by processing examples in context. |
other,1-4-H01-1042,bq |
</term>
of
<term>
MT output
</term>
. A
<term>
|
language
|
learning experiment
</term>
showed that
<term>
|
#629
A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words. |
model,16-3-P06-4011,bq |
the
<term>
Web
</term>
and building a
<term>
|
language
|
model
</term>
of
<term>
abstract moves
</term>
|
#11753
The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves. |
tech,19-2-H01-1041,bq |
generation modules
</term>
mediated by a
<term>
|
language
|
neutral meaning representation
</term>
called
|
#430
The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame. |
other,20-3-P05-1069,bq |
real-valued features
</term>
( e.g. a
<term>
|
language
|
model score
</term>
) as well as
<term>
binary
|
#9605
We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. |
other,10-5-P01-1007,bq |
of the
<term>
main parser
</term>
for a
<term>
|
language
|
L
</term>
are directed by a
<term>
guide
</term>
|
#1710
The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L. |
other,8-1-C04-1103,bq |
role in many
<term>
multilingual speech and
|
language
|
applications
</term>
. In this paper , a
|
#5740
Machine transliteration/back-transliteration plays an important role in many multilingual speech and language applications. |
|
identified using a
<term>
phrase
</term>
in another
|
language
|
as a pivot . We define a
<term>
paraphrase
|
#9711
Using alignment techniques from phrase-based statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. |
other,16-5-P03-1050,bq |
the approach is applicable to any
<term>
|
language
|
</term>
that needs
<term>
affix removal
</term>
|
#4526
Examples and results will be given for Arabic, but the approach is applicable to any language that needs affix removal. |
other,21-3-N06-4001,bq |
context to uncover relationships between
<term>
|
language
|
</term>
and
<term>
behavioral patterns
</term>
|
#10915
As evidence of its usefulness and usability, it has been used successfully in a research context to uncover relationships between language and behavioral patterns in two distinct domains: tutorial dialogue (Kumar et al., submitted) and on-line communities (Arguello et al., 2006). |
model,10-2-H92-1016,bq |
modelling
</term>
, the use of a
<term>
bigram
|
language
|
model
</term>
in conjunction with a
<term>
|
#18720
These include context-dependent phonetic modelling, the use of a bigram language model in conjunction with a probabilistic LR parser, and refinements made to the lexicon. |
tech,10-3-H01-1070,bq |
than 99 %
<term>
accuracy
</term>
in both
<term>
|
language
|
identification
</term>
and
<term>
key prediction
|
#1287
Our algorithm reported more than 99% accuracy in both language identification and key prediction. |
tech,7-3-C88-2162,bq |
linguistic representation
</term>
used by
<term>
|
language
|
processing systems
</term>
is not geared
|
#15777
For another, linguistic representation used by language processing systems is not geared to learning. |
other,15-6-C94-1026,bq |
which are selected from different
<term>
|
language
|
families
</term>
. In
<term>
optical character
|
#20610
Most importantly, the experimental objects are Chinese-English texts, which are selected from different language families. |
other,16-6-E06-1031,bq |
investigated systematically on two different
<term>
|
language
|
pairs
</term>
. The experimental results
|
#10421
The correlation of the new measure with human judgment has been investigated systematically on two different language pairs. |
tech,11-5-H01-1058,bq |
clearly show the need for a
<term>
dynamic
|
language
|
model combination
</term>
to improve the
<term>
|
#1143
We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further. |
other,11-4-C04-1103,bq |
<term>
English/Chinese and English/Japanese
|
language
|
pairs
</term>
. Our study reveals that the
|
#5816
We evaluate the proposed methods through several transliteration/back transliteration experiments for English/Chinese and English/Japanese language pairs. |
other,4-6-P84-1047,bq |
Representative samples from an
<term>
entity-oriented
|
language
|
definition
</term>
are presented , along
|
#13409
Representative samples from an entity-oriented language definition are presented, along with a control structure for an entity-oriented parser, some parsing strategies that use the control structure, and worked examples of parses. |
other,9-3-N03-1017,bq |
results , which hold for all examined
<term>
|
language
|
pairs
</term>
, suggest that the highest
|
#2597
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. |