#17Oral communication is ubiquitous and carries important information, yet it is also time-consuming to document. Given the development of storage media and networks, one could simply record and store a conversation for documentation.
translation output
</term>
. Subjects were
given
a set of up to six extracts of translated
#684Subjects were given a set of up to six extracts of translated newswire text.
translation outputs
</term>
. The subjects were
given
three minutes per extract to determine
#715The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation.
possible
<term>
sentence plans
</term>
for a
given
<term>
text-plan input
</term>
. Second , the
#1396First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input.
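The randomized SPG described in #1396 can be illustrated with a minimal sketch: given a text plan (here just a list of proposition strings), randomly order and aggregate the propositions into candidate sentence plans. All names and the grouping scheme below are illustrative assumptions, not the cited system's actual implementation.

```python
import random

def random_sentence_plan(text_plan, rng):
    """Build one candidate sentence plan by shuffling the propositions
    and aggregating 1-2 of them per planned sentence."""
    props = text_plan[:]
    rng.shuffle(props)
    plan, i = [], 0
    while i < len(props):
        size = rng.choice([1, 2])            # how many propositions this sentence realizes
        plan.append(tuple(props[i : i + size]))
        i += size
    return plan

def generate_plans(text_plan, n=5, seed=0):
    """Overgenerate n candidate sentence plans for a given text-plan input."""
    rng = random.Random(seed)
    return [random_sentence_plan(text_plan, rng) for _ in range(n)]

text_plan = ["greet(user)", "offer(flight)", "give(price)"]
for plan in generate_plans(text_plan):
    print(plan)
```

Each candidate covers all input propositions; a separate ranker would then score the overgenerated list.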
<term>
wide coverage English grammar
</term>
are
given
. While
<term>
paraphrasing
</term>
is critical
#1752The results of a practical evaluation of this method on a wide coverage English grammar are given.
<term>
off-the-shelf classifiers
</term>
to
give
<term>
utterance classification performance
#2235The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription.
</term>
as either coherent or incoherent (
given
a
<term>
baseline
</term>
of 54.55 % ) . We
#2532An evaluation of our system against the annotated data shows that it successfully classifies 73.2% in a German corpus of 2.284 SRHs as either coherent or incoherent (given a baseline of 54.55%).
together , the resulting
<term>
tagger
</term>
gives
a 97.24 %
<term>
accuracy
</term>
on the
<term>
#2988Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.
inflow of multilingual , multimedia data . It
gives
users the ability to spend their time finding
#3606It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology.
finding more data relevant to their task , and
gives
them translingual reach into other
<term>
#3623It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology.
likely
<term>
answer candidates
</term>
from the
given
<term>
text corpus
</term>
. The operation
#3681The demonstration will focus on how JAVELIN processes questions and retrieves the most likely answer candidates from the given text corpus.
genre
</term>
. Examples and results will be
given
for
<term>
Arabic
</term>
, but the approach
#4517Examples and results will be given for Arabic, but the approach is applicable to any language that needs affix removal.
probable
<term>
morpheme sequence
</term>
for a
given
<term>
input
</term>
. The
<term>
language model
#4688The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input.
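The trigram scoring in #4688 can be sketched as follows: score each candidate morpheme segmentation under a trigram model and keep the argmax. The probability table, back-off constant, and candidates below are toy assumptions for illustration only.

```python
import math

# Hypothetical trigram log-probabilities over morphemes; "<s>" pads the left context.
TRIGRAM_LOGP = {
    ("<s>", "<s>", "un"): math.log(0.4),
    ("<s>", "un", "break"): math.log(0.5),
    ("un", "break", "able"): math.log(0.6),
    ("<s>", "<s>", "unbreak"): math.log(0.1),
    ("<s>", "unbreak", "able"): math.log(0.2),
}
UNK_LOGP = math.log(1e-6)  # crude floor for unseen trigrams

def score(morphemes):
    """Trigram log-probability of a morpheme sequence."""
    padded = ["<s>", "<s>"] + morphemes
    return sum(
        TRIGRAM_LOGP.get(tuple(padded[i - 2 : i + 1]), UNK_LOGP)
        for i in range(2, len(padded))
    )

def best_segmentation(candidates):
    """Return the most probable morpheme sequence for a given input's candidates."""
    return max(candidates, key=score)

candidates = [["un", "break", "able"], ["unbreak", "able"]]
print(best_segmentation(candidates))  # ['un', 'break', 'able']
```

A real system would enumerate candidates from a lexicon and use smoothed, trained probabilities rather than this hand-set table.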
performance of a
<term>
summarizer
</term>
, at times
giving
it a significant lead over
<term>
non-Bayesian
#5421It is found that the Bayesian approach generally leverages performance of a summarizer, at times giving it a significant lead over non-Bayesian models.
reranking
</term>
. The
<term>
model
</term>
gives
an
<term>
F-measure improvement
</term>
of
#5529The model gives an F-measure improvement of ≈ 1.25% beyond the base parser, and an ≈ 0.25% improvement beyond the Collins (2000) reranker.
evaluations have shown that
<term>
SMT
</term>
gives
competitive results to
<term>
rule-based
#6777Over the last few years dramatic improvements have been made, and a number of comparative evaluations have shown that SMT gives competitive results to rule-based translation systems, requiring significantly less development time.
domains
</term>
. This workshop is intended to
give
an introduction to
<term>
statistical machine
#6812This workshop is intended to give an introduction to statistical machine translation with a focus on practical considerations.
extent
<term>
entailment
</term>
. Our technique
gives
a substantial improvement in
<term>
paraphrase
#7472Our technique gives a substantial improvement in paraphrase classification accuracy over all of the other models used in the experiments.
described . Moreover , some examples are
given
that underline the necessity of integrating
#7830Moreover, some examples are given that underline the necessity of integrating some kind of information other than grammar sensu stricto into the treebank.
<term>
maximum entropy classifier
</term>
that ,
given
a
<term>
pair of sentences
</term>
, can reliably
#8371We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other.
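The classifier in #8371 has the standard maximum-entropy (logistic) form: a weighted feature sum pushed through a sigmoid. The features, bilingual lexicon, and hand-set weights below are toy assumptions standing in for trained parameters.

```python
import math

# Toy bilingual lexicon used as a feature source (illustrative only).
LEXICON = {("house", "haus"), ("the", "das"), ("is", "ist"), ("green", "grün")}

def features(src, tgt):
    """Length ratio and lexicon coverage for a candidate sentence pair."""
    s, t = src.split(), tgt.split()
    ratio = min(len(s), len(t)) / max(len(s), len(t))
    links = sum(1 for w in s for v in t if (w, v) in LEXICON)
    coverage = links / max(len(s), len(t))
    return [1.0, ratio, coverage]  # bias term + two features

# Hand-set weights standing in for trained maximum-entropy parameters.
WEIGHTS = [-3.0, 1.5, 6.0]

def p_translation(src, tgt):
    """Probability that tgt is a translation of src (logistic/maxent form)."""
    z = sum(w * f for w, f in zip(WEIGHTS, features(src, tgt)))
    return 1.0 / (1.0 + math.exp(-z))

print(p_translation("the house is green", "das haus ist grün"))    # high
print(p_translation("the house is green", "ich gehe nach hause"))  # low
```

A trained system would fit the weights by maximum-likelihood on labeled parallel/non-parallel pairs instead of setting them by hand.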