Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations.
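A minimal sketch of the two techniques named above: extracting phrase pairs consistent with a word alignment, and computing a lexical weight for a phrase pair. This is an illustration, not the authors' code; the toy sentence pair, alignment, and lexicon are assumptions.

```python
# Illustrative sketch (not the authors' implementation) of heuristic phrase
# extraction from a word alignment, plus lexical weighting of a phrase pair.

def extract_phrases(src, tgt, alignment, max_len=3):
    """Return phrase pairs consistent with the alignment: no link may
    connect a word inside the pair to a word outside it."""
    pairs = []
    for i1 in range(len(src)):
        for i2 in range(i1, min(len(src), i1 + max_len)):
            linked = [j for (i, j) in alignment if i1 <= i <= i2]
            if not linked:
                continue
            j1, j2 = min(linked), max(linked)
            if j2 - j1 + 1 > max_len:
                continue
            # Consistency check on the target side
            if all(i1 <= i <= i2 for (i, j) in alignment if j1 <= j <= j2):
                pairs.append((tuple(src[i1:i2 + 1]), tuple(tgt[j1:j2 + 1])))
    return pairs

def lex_weight(src, tgt, alignment, w):
    """Lexical weight: product over target words of the average lexical
    probability w(e|f) over the source words aligned to each target word."""
    p = 1.0
    for j, e in enumerate(tgt):
        srcs = [i for (i, jj) in alignment if jj == j]
        if srcs:
            p *= sum(w.get((src[i], e), 0.0) for i in srcs) / len(srcs)
    return p

src, tgt = ["das", "haus"], ["the", "house"]
alignment = {(0, 0), (1, 1)}           # (source index, target index)
w = {("das", "the"): 0.8, ("haus", "house"): 0.9}
print(extract_phrases(src, tgt, alignment))
print(lex_weight(src, tgt, alignment, w))  # 0.8 * 0.9 = 0.72
```

The consistency criterion is what makes the heuristic "relatively simple": phrase pairs fall out of the alignment directly, with no search.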
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance.
Learning only syntactically motivated phrases degrades the performance of our systems.
It requires disjoint English phrases to be mapped to non-overlapping intervals in the French sentence.
The units of translation are blocks: pairs of phrases.
We show experimental results on block selection criteria based on unigram counts and phrase length.
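The sentence above does not spell out the criteria, so the following is only one plausible reading, with hypothetical thresholds: keep a block (a source/target phrase pair) only if its words are frequent enough and both phrases are short enough.

```python
# Hypothetical illustration of block selection by unigram counts and phrase
# length; thresholds and data are assumptions, not the paper's criteria.

def select_blocks(blocks, unigram_counts, min_count=2, max_len=4):
    """Keep a block only if every word in it occurs at least `min_count`
    times and both phrases have at most `max_len` words."""
    kept = []
    for src, tgt in blocks:
        if len(src) > max_len or len(tgt) > max_len:
            continue
        if all(unigram_counts.get(w, 0) >= min_count for w in src + tgt):
            kept.append((src, tgt))
    return kept

counts = {"the": 10, "house": 5, "das": 7, "haus": 4, "zeitgeist": 1}
blocks = [(("das",), ("the",)),
          (("das", "haus"), ("the", "house")),
          (("zeitgeist",), ("zeitgeist",))]
print(select_blocks(blocks, counts))  # drops the rare-word block
```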
We demonstrate how errors in the machine translations of the input Arabic documents can be corrected by identifying and generating from such redundancy, focusing on noun phrases.
This paper presents a phrase-based statistical machine translation method, based on non-contiguous phrases, i.e. phrases with gaps.
A method for producing such phrases from a word-aligned corpus is proposed.
A statistical translation model that deals with such phrases is also presented, as well as a training method based on the maximization of translation accuracy, as measured with the NIST evaluation metric.
In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations.
We detail the computational complexity and average retrieval times for looking up phrase translations in our suffix array-based data structure.
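A toy sketch of the suffix-array idea behind such a data structure (the real implementation is more involved): the corpus is stored once, plus one integer per token, and a phrase of any length is located by binary search over sorted suffixes rather than being stored in an explicit phrase table.

```python
# Sketch, with an assumed toy corpus: suffix-array lookup of arbitrarily
# long phrases via two binary searches (lower and upper bound).

corpus = "the house is small the house is big".split()
# Suffix array: token positions sorted by the suffix starting at each position.
sa = sorted(range(len(corpus)), key=lambda i: corpus[i:])

def occurrences(phrase):
    """Return all corpus positions where the token list `phrase` occurs."""
    k = len(phrase)
    key = lambda i: corpus[i:i + k]
    # Lower bound: first suffix whose k-token prefix is >= phrase
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if key(sa[mid]) < phrase:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    # Upper bound: first suffix whose k-token prefix is > phrase
    hi = len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if key(sa[mid]) <= phrase:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

print(occurrences(["the", "house", "is"]))  # -> [0, 4]
print(occurrences(["house"]))               # -> [1, 5]
```

Because lookup is by search rather than enumeration, no maximum phrase length has to be fixed in advance, which is what makes "arbitrarily long phrases" feasible.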
Our study illustrates that the base phrase chunking information is very effective for relation extraction and contributes to most of the performance improvement from the syntactic aspect, while additional information from full parsing gives limited further enhancement.
The model predicts blocks with orientation to handle local phrase re-ordering.
Using alignment techniques from phrase-based statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot.
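The pivot idea above can be sketched as follows, with an assumed toy phrase table rather than real data: English phrases that translate to the same foreign phrase are paraphrase candidates, scored by marginalizing over pivots, p(e2|e1) = sum over f of p(f|e1) * p(e2|f).

```python
# Hedged sketch of pivot-based paraphrase identification; the phrase tables
# below are toy assumptions, not extracted from a real bitext.
from collections import defaultdict

# p(f|e): English -> French phrase translation probabilities
p_f_given_e = {
    ("under", "control"): {("sous", "controle"): 1.0},
    ("in", "check"): {("sous", "controle"): 0.8, ("en", "echec"): 0.2},
}
# p(e|f): French -> English phrase translation probabilities
p_e_given_f = {
    ("sous", "controle"): {("under", "control"): 0.6, ("in", "check"): 0.4},
    ("en", "echec"): {("in", "check"): 1.0},
}

def paraphrases(e1):
    """Score paraphrase candidates of e1 by summing over pivot phrases."""
    scores = defaultdict(float)
    for f, p_f in p_f_given_e.get(e1, {}).items():
        for e2, p_e in p_e_given_f.get(f, {}).items():
            if e2 != e1:
                scores[e2] += p_f * p_e
    return dict(scores)

print(paraphrases(("under", "control")))  # -> {('in', 'check'): 0.4}
```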
Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task.
Metagrammatical formalisms that combine context-free phrase structure rules and metarules (MPS grammars) allow concise statement of generalizations about the syntax of natural languages.
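To make the metarule idea concrete, here is an illustrative sketch with an assumed rule representation (not the paper's formalism): a metarule is a function from existing phrase structure rules to new rules, shown here as a passive-style metarule that rewrites "VP -> V NP ..." rules.

```python
# Assumed representation: a rule is (lhs, rhs_tuple). The metarule below is
# a simplified, hypothetical passive rule for illustration only.

def passive_metarule(rule):
    """If the rule matches VP -> V NP <rest>, emit VP[pas] -> V <rest>."""
    lhs, rhs = rule
    if lhs == "VP" and rhs[:2] == ("V", "NP"):
        return ("VP[pas]", ("V",) + rhs[2:])
    return None

base_rules = [("VP", ("V", "NP")),
              ("VP", ("V", "NP", "PP")),
              ("NP", ("Det", "N"))]
derived = [r for r in (passive_metarule(r) for r in base_rules) if r]
print(derived)  # -> [('VP[pas]', ('V',)), ('VP[pas]', ('V', 'PP'))]
```

One metarule thus stands in for a whole family of rules, which is the "concise statement of generalizations" the sentence above refers to.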