#282 In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser.
other,18-6-H01-1041,ak
system
</term>
produces the
<term>
translation
output
</term>
sufficient for content understanding
#534 Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces translation output sufficient for content understanding of the original document.
other,28-1-H01-1042,ak
human language learners
</term>
, to the
<term>
output
</term>
of
<term>
machine translation ( MT
#572 The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems.
other,16-3-H01-1042,ak
the
<term>
intelligibility
</term>
of
<term>
MT
output
</term>
. A
<term>
language learning experiment
#626 This, the first experiment in a series of experiments, looks at the intelligibility of MT output.
other,16-6-H01-1042,ak
experiment using
<term>
machine translation
output
</term>
. Subjects were given a set of up
#680 We tested this to see if similar criteria could be elicited by duplicating the experiment using machine translation output.
other,11-8-H01-1042,ak
</term>
, others were
<term>
machine translation
outputs
</term>
. The subjects were given three minutes
#710 Some of the extracts were expert human translations; others were machine translation outputs.
determine whether they believed the sample
output
to be an
<term>
expert human translation
</term>
#727 The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation.
other,11-5-N01-1003,ak
sentence-plan-ranker ( SPR )
</term>
ranks the list of
<term>
output
sentence plans
</term>
, and then selects
#1411 Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan.
the shared
<term>
derivation forest
</term>
output
by a prior
<term>
RCL parser
</term>
for a
#1723 The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L.
other,21-3-N03-1001,ak
particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
#2277 In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier.
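The phone n-gram modeling step in #2277 can be sketched in Python. This is a minimal illustration, not the paper's implementation: the bigram order, the add-alpha smoothing, the `train_phone_bigram` helper name, and the space-separated phone-string format are all assumptions for the example.

```python
from collections import Counter
import math

def train_phone_bigram(phone_strings, alpha=1.0):
    # Train an add-alpha smoothed bigram model over phone symbols from
    # unlabeled (unsupervised) domain data, and return a scoring function.
    bigrams, contexts = Counter(), Counter()
    vocab = set()
    for s in phone_strings:
        seq = ["<s>"] + s.split() + ["</s>"]
        vocab.update(seq)
        for a, b in zip(seq, seq[1:]):
            bigrams[(a, b)] += 1
            contexts[a] += 1
    V = len(vocab)

    def logprob(s):
        # Log-probability of a phone string under the smoothed bigram model;
        # downstream, such scores could feed a phone-string classifier.
        seq = ["<s>"] + s.split() + ["</s>"]
        return sum(
            math.log((bigrams[(a, b)] + alpha) / (contexts[a] + alpha * V))
            for a, b in zip(seq, seq[1:])
        )

    return logprob
```

In-domain phone sequences then score higher than out-of-domain ones under the returned `logprob` function.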
other,38-1-N03-1018,ak
through its transformation into the
<term>
noisy
output
</term>
of an
<term>
OCR system
</term>
. The
#2707 In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework, progressing from generation of true text through its transformation into the noisy output of an OCR system.
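The noisy channel framework named in #2707 amounts to choosing the true text that maximizes P(true) * P(noisy | true). A toy word-level sketch, assuming a uniform prior over a small lexicon and a hypothetical per-edit channel penalty `p_err` (neither assumption comes from the paper):

```python
def edit_distance(a, b):
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def noisy_channel_correct(noisy, lexicon, p_err=0.1):
    # Toy noisy-channel decoder: pick the true word w maximizing
    # P(w) * P(noisy | w). With a uniform prior, this reduces to the
    # channel term; here each edit costs a factor of p_err.
    return max(lexicon, key=lambda w: p_err ** edit_distance(noisy, w))
```

A real OCR model would use character-level confusion probabilities and a language model prior instead of this uniform-prior stand-in.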
other,16-2-N03-1018,ak
on
<term>
post-processing
</term>
the
<term>
output
</term>
of
<term>
black-box OCR systems
</term>
#2729 The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks.
tech,26-2-N03-1026,ak
maximum-entropy model
</term>
for
<term>
stochastic
output
selection
</term>
. Furthermore , we propose
#2836 Our system incorporates a linguistic parser/generator for LFG, a transfer component for parse reduction operating on packed parse forests, and a maximum-entropy model for stochastic output selection.
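A maximum-entropy (log-linear) model for stochastic output selection, as named in #2836, scores each candidate output by a weighted feature sum and normalizes with a softmax. A minimal sketch with an invented `feature_fn`/`weights` interface; the actual features of the LFG system are not described here:

```python
import math

def maxent_select(candidates, feature_fn, weights):
    # Log-linear model: score = sum of weight * feature value, then
    # softmax-normalize over all candidate outputs and return the
    # most probable candidate together with its probability.
    scores = [sum(weights.get(f, 0.0) * v
                  for f, v in feature_fn(c).items())
              for c in candidates]
    z = sum(math.exp(s) for s in scores)
    probs = [math.exp(s) / z for s in scores]
    best = max(range(len(candidates)), key=lambda k: probs[k])
    return candidates[best], probs[best]
```

For example, with a single length feature and a negative weight, the model prefers shorter outputs.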
other,15-5-N03-1026,ak
<term>
grammaticality
</term>
of the
<term>
system
output
</term>
due to the use of a
<term>
constraint-based
#2900 Overall summarization quality of the proposed system is state-of-the-art, with guaranteed grammaticality of the system output due to the use of a constraint-based parser/generator.
input documents are in Arabic , and the
output
summary is in English . Typically , information
#5196 We consider the case of multi-document summarization, where the input documents are in Arabic, and the output summary is in English.
tech,3-3-H05-1117,ak
<term>
automatic methods for scoring system
output
</term>
is an impediment to progress in the
#5979 The lack of automatic methods for scoring system output is an impediment to progress in the field, which we address with this work.
other,30-2-H05-2007,ak
patterns
</term>
in
<term>
machine translation
output
</term>
. We present a tool , called
<term>
#6077 We incorporate this analysis into a diagnostic tool intended for developers of machine translation systems, and demonstrate how our application can be used by developers to explore patterns in machine translation output.
other,20-1-I05-2013,ak
<term>
French
</term>
and produces as
<term>
output
</term>
the same
<term>
text
</term>
in which
#6099 We present a tool, called ILIMP, which takes as input a raw text in French and produces as output the same text in which every occurrence of the pronoun il is tagged either with tag [ANA] for anaphoric or [IMP] for impersonal or expletive.
possible to directly compare commercial systems
outputting
unsegmented texts with , for instance ,
#6312 The use of BLEU at the character level eliminates the word segmentation problem: it makes it possible to directly compare commercial systems outputting unsegmented texts with, for instance, statistical MT systems which usually segment their outputs.
systems
</term>
which usually segment their
outputs
. We present the first known empirical
#6327 The use of BLEU at the character level eliminates the word segmentation problem: it makes it possible to directly compare commercial systems outputting unsegmented texts with, for instance, statistical MT systems which usually segment their outputs.
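Character-level BLEU, as used in #6312/#6327, computes n-gram precisions over characters rather than words, so neither the system output nor the reference needs word segmentation. A simplified single-reference sketch; the count clipping and brevity penalty follow standard BLEU, and `max_n=4` is an assumption:

```python
from collections import Counter
import math

def char_ngrams(text, n):
    # Multiset of character n-grams of the string.
    return Counter(tuple(text[i:i + n]) for i in range(len(text) - n + 1))

def char_bleu(candidate, reference, max_n=4):
    # Clipped character n-gram precisions for n = 1..max_n, combined by
    # geometric mean and scaled by the brevity penalty.
    precisions = []
    for n in range(1, max_n + 1):
        cand = char_ngrams(candidate, n)
        ref = char_ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped counts
        total = sum(cand.values())
        precisions.append(overlap / total if total else 0.0)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = math.exp(min(0.0, 1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)
```

Because the unit is the character, segmented and unsegmented outputs can be scored against the same reference without any tokenization step.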