other,16-5-P03-1050,bq |
the approach is applicable to any
<term>
|
language
|
</term>
that needs
<term>
affix removal
</term>
|
#4526
Examples and results will be given for Arabic, but the approach is applicable to any language that needs affix removal. |
model,4-3-P03-1051,bq |
<term>
algorithm
</term>
uses a
<term>
trigram
|
language
|
model
</term>
to determine the most probable
|
#4675
The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input. |
other,8-1-C04-1103,bq |
role in many
<term>
multilingual speech and
|
language
|
applications
</term>
. In this paper , a
|
#5740
Machine transliteration/back-transliteration plays an important role in many multilingual speech and language applications. |
other,11-5-C04-1147,bq |
terabyte corpus
</term>
to answer
<term>
natural
|
language
|
tests
</term>
, achieving encouraging results
|
#6429
We apply it in combination with a terabyte corpus to answer natural language tests, achieving encouraging results. |
other,31-3-N04-1022,bq |
parse-trees
</term>
of
<term>
source and target
|
language
|
sentences
</term>
. We report the performance
|
#6609
We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences. |
other,10-2-I05-2014,bq |
scarcely used for the assessment of
<term>
|
language
|
pairs
</term>
like
<term>
English-Chinese
</term>
|
#7710
Yet, they are scarcely used for the assessment of language pairs like English-Chinese or English-Japanese, because of the word segmentation problem. |
other,34-3-I05-2021,bq |
</term>
of the
<term>
words
</term>
in
<term>
source
|
language
|
sentences
</term>
. Surprisingly however
|
#7891
At the same time, the recent improvements in the BLEU scores of statistical machine translation (SMT) suggest that SMT models are good at predicting the right translation of the words in source language sentences. |
other,14-1-I05-2048,bq |
currently one of the hot spots in
<term>
natural
|
language
|
processing
</term>
. Over the last few years
|
#8002
Statistical machine translation (SMT) is currently one of the hot spots in natural language processing. |
tech,8-12-J05-1003,bq |
experiments in this article are on
<term>
natural
|
language
|
parsing ( NLP )
</term>
, the
<term>
approach
|
#8945
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
other,11-6-J05-4003,bq |
can be applied with great benefit to
<term>
|
language
|
pairs
</term>
for which only scarce
<term>
|
#9111
Thus, our method can be applied with great benefit to language pairs for which only scarce resources are available. |
other,15-1-P05-1034,bq |
syntactic information
</term>
in the
<term>
source
|
language
|
</term>
with recent advances in
<term>
phrasal
|
#9217
We describe a novel approach to statistical machine translation that combines syntactic information in the source language with recent advances in phrasal translation. |
other,20-3-P05-1069,bq |
real-valued features
</term>
( e.g. a
<term>
|
language
|
model score
</term>
) as well as
<term>
binary
|
#9605
We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features. |
other,15-3-P05-1074,bq |
how
<term>
paraphrases
</term>
in one
<term>
|
language
|
</term>
can be identified using a
<term>
phrase
|
#9702
Using alignment techniques from phrase-based statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. |
other,16-6-E06-1031,bq |
investigated systematically on two different
<term>
|
language
|
pairs
</term>
. The experimental results
|
#10421
The correlation of the new measure with human judgment has been investigated systematically on two different language pairs. |
other,3-2-N06-2009,bq |
need
</term>
. Finding the preferred
<term>
|
language
|
</term>
for such a
<term>
need
</term>
is a valuable
|
#10752
Finding the preferred language for such a need is a valuable task. |
other,21-3-N06-4001,bq |
context to uncover relationships between
<term>
|
language
|
</term>
and
<term>
behavioral patterns
</term>
|
#10915
As evidence of its usefulness and usability, it has been used successfully in a research context to uncover relationships between language and behavioral patterns in two distinct domains: tutorial dialogue (Kumar et al., submitted) and on-line communities (Arguello et al., 2006). |
model,16-3-P06-4011,bq |
the
<term>
Web
</term>
and building a
<term>
|
language
|
model
</term>
of
<term>
abstract moves
</term>
|
#11753
The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves. |
other,9-1-P80-1004,bq |
process in
<term>
human understanding of natural
|
language
|
</term>
. This paper discusses a
<term>
method
|
#12453
Interpreting metaphors is an integral and inescapable process in human understanding of natural language. |
tech,1-1-P80-1019,bq |
are also discussed . Current
<term>
natural
|
language
|
interfaces
</term>
have concentrated largely
|
#12529
Current natural language interfaces have concentrated largely on determining the literal meaning of input from their users. |
other,3-1-P80-1026,bq |
interfaces
</term>
. When people use
<term>
natural
|
language
|
</term>
in natural settings , they often
|
#12670
When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking off and restarting, speaking in fragments, etc. |