other,9-6-E06-1035,bq |
transcription errors
</term>
inevitable in
<term>
ASR
|
output
|
</term>
have a negative impact on models
|
#10618
We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks. |
other,9-4-E06-1035,bq |
<term>
performance
</term>
of using
<term>
ASR
|
output
|
</term>
as opposed to
<term>
human transcription
|
#10519
We then explore the impact on performance of using ASR output as opposed to human transcription. |
|
the
<term>
shared derivation forest
</term>
|
output
|
by a prior
<term>
RCL parser
</term>
for a
|
#1723
The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L. |
other,16-3-H01-1042,bq |
the
<term>
intelligibility
</term>
of
<term>
MT
|
output
|
</term>
. A
<term>
language learning experiment
|
#626
This, the first experiment in a series of experiments, looks at the intelligibility of MT output. |
|
</term>
of a
<term>
parser
</term>
's multiple
|
output
|
. Some examples of
<term>
paraphrasing
</term>
|
#15726
This paper presents a new interactive disambiguation scheme based on the paraphrasing of a parser's multiple output. |
other,38-1-N03-1018,bq |
through its transformation into the
<term>
noisy
|
output
|
</term>
of an
<term>
OCR system
</term>
. The
|
#2706
In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework, progressing from generation of true text through its transformation into the noisy output of an OCR system. |
|
sentence-plan-ranker ( SPR )
</term>
ranks the list of
|
output
|
<term>
sentence plans
</term>
, and then selects
|
#1411
Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan. |
other,18-1-P06-4014,bq |
pipeline
</term>
that capitalizes on
<term>
|
output
|
quality
</term>
. The demonstrator embodies
|
#11807
The LOGON MT demonstrator assembles independently valuable general-purpose NLP components into a machine translation pipeline that capitalizes on output quality. |
other,12-4-P05-2016,bq |
</term>
and plan to compare our
<term>
system 's
|
output
|
</term>
with a
<term>
benchmark system
</term>
|
#9825
We also refer to an evaluation method and plan to compare our system's output with a benchmark system. |
|
determine whether they believed the sample
|
output
|
to be an
<term>
expert human translation
</term>
|
#727
The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation. |
|
. In this paper we show how two standard
|
outputs
|
from
<term>
information extraction ( IE )
|
#282
In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser. |
tech,26-2-N03-1026,bq |
maximum-entropy model
</term>
for
<term>
stochastic
|
output
|
selection
</term>
. Furthermore , we propose
|
#2835
Our system incorporates a linguistic parser/generator for LFG, a transfer component for parse reduction operating on packed parse forests, and a maximum-entropy model for stochastic output selection. |
measure(ment),6-3-H05-1117,bq |
<term>
methods
</term>
for
<term>
scoring system
|
output
|
</term>
is an impediment to progress in the
|
#7578
The lack of automatic methods for scoring system output is an impediment to progress in the field, which we address with this work. |
other,15-5-N03-1026,bq |
<term>
grammaticality
</term>
of the
<term>
system
|
output
|
</term>
due to the use of a
<term>
constraint-based
|
#2899
Overall summarization quality of the proposed system is state-of-the-art, with guaranteed grammaticality of the system output due to the use of a constraint-based parser/generator. |
|
directly compare commercial
<term>
systems
</term>
|
outputting
|
<term>
unsegmented texts
</term>
with , for
|
#7769
The use of BLEU at the character level eliminates the word segmentation problem: it makes it possible to directly compare commercial systems outputting unsegmented texts with, for instance, statistical MT systems which usually segment their outputs. |
other,21-3-N03-1001,bq |
particular
<term>
domain
</term>
; the
<term>
|
output
|
</term>
of
<term>
recognition
</term>
with this
|
#2276
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
|
</term>
are in
<term>
Arabic
</term>
, and the
|
output
|
<term>
summary
</term>
is in
<term>
English
</term>
|
#7170
We consider the case of multi-document summarization, where the input documents are in Arabic, and the output summary is in English. |
tech,3-7-P05-1067,bq |
<term>
model
</term>
. We evaluate the
<term>
|
outputs
|
</term>
of our
<term>
MT system
</term>
using
|
#9512
We evaluate the outputs of our MT system using the NIST and Bleu automatic MT evaluation software. |
|
by gathering
<term>
statistics
</term>
on the
|
output
|
of other
<term>
linguistic tools
</term>
.
|
#16664
The scheme was implemented by gathering statistics on the output of other linguistic tools. |
other,16-2-N03-1018,bq |
on
<term>
post-processing
</term>
the
<term>
|
output
|
</term>
of black-box
<term>
OCR systems
</term>
|
#2728
The model is designed for use in error correction, with a focus on post-processing theoutput of black-box OCR systems in order to make it more useful for NLP tasks. |