|
improve upon this initial
<term>
ranking
</term>
,
|
using
|
additional
<term>
features
</term>
of the
<term>
|
#8701
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
|
<term>
phrases
</term>
while simultaneously
|
using
|
less
<term>
memory
</term>
than is required
|
#9146
In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations. |
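A minimal sketch of the kind of data structure this sentence describes, assuming a suffix-array approach: occurrences of arbitrarily long phrases are located on demand in the tokenized corpus instead of enumerating and storing every phrase up front. The class and method names (`PhraseIndex`, `find`) are illustrative, not taken from the paper.

```python
class PhraseIndex:
    """Suffix-array sketch for on-demand phrase lookup in a token corpus."""

    def __init__(self, tokens):
        self.tokens = tokens
        # suffix array: start positions sorted by the suffix they begin
        self.sa = sorted(range(len(tokens)), key=lambda i: tokens[i:])

    def _prefix(self, i, k):
        # first k tokens of the suffix starting at corpus position i
        return self.tokens[i:i + k]

    def find(self, phrase):
        """Return all corpus positions where `phrase` occurs."""
        k = len(phrase)
        lo, hi = 0, len(self.sa)
        while lo < hi:  # lower bound of the matching suffix range
            mid = (lo + hi) // 2
            if self._prefix(self.sa[mid], k) < phrase:
                lo = mid + 1
            else:
                hi = mid
        start, hi = lo, len(self.sa)
        while lo < hi:  # upper bound of the matching suffix range
            mid = (lo + hi) // 2
            if self._prefix(self.sa[mid], k) <= phrase:
                lo = mid + 1
            else:
                hi = mid
        return sorted(self.sa[start:lo])
```

For example, `PhraseIndex("the cat sat on the mat the cat ran".split()).find(["the", "cat"])` returns the two start positions of that bigram; memory stays linear in the corpus rather than in the number of extractable phrases.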
|
efficient
<term>
decoder
</term>
and show that
|
using
|
these
<term>
tree-based models
</term>
in combination
|
#9281
We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser. |
|
outputs
</term>
of our
<term>
MT system
</term>
|
using
|
the
<term>
NIST and Bleu automatic MT evaluation
|
#9517
We evaluate the outputs of our MT system using the NIST and Bleu automatic MT evaluation software. |
|
</term>
. We show that this task can be done
|
using
|
<term>
bilingual parallel corpora
</term>
,
|
#9675
We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. |
|
one
<term>
language
</term>
can be identified
|
using
|
a
<term>
phrase
</term>
in another language
|
#9706
Using alignment techniques from phrase-based statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. |
|
bilingual parallel corpus
</term>
to be ranked
|
using
|
<term>
translation probabilities
</term>
,
|
#9733
We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. |
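A hedged sketch of ranking paraphrases with translation probabilities via a pivot: a candidate paraphrase e2 for a phrase e1 is scored by marginalising over shared foreign translations f, i.e. p(e2 | e1) = Σ_f p(e2 | f) · p(f | e1). The probability tables below are toy values invented for illustration, not real model estimates.

```python
def paraphrase_prob(e1, e2, p_f_given_e, p_e_given_f):
    """Pivot paraphrase score: sum over foreign phrases f of p(e2|f) * p(f|e1)."""
    return sum(p_e_given_f.get((e2, f), 0.0) * pf
               for f, pf in p_f_given_e.get(e1, {}).items())

# toy tables: "under control" has two German pivot translations
p_f_given_e = {"under control": {"unter kontrolle": 0.8, "in den griff": 0.2}}
p_e_given_f = {("in check", "unter kontrolle"): 0.5,
               ("in check", "in den griff"): 0.1}

score = paraphrase_prob("under control", "in check", p_f_given_e, p_e_given_f)
# 0.5 * 0.8 + 0.1 * 0.2 = 0.42
```

Candidates can then be ranked by this score; the sentence's "contextual refinement" would further condition these probabilities, which this sketch omits.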
|
paraphrase extraction and ranking methods
</term>
|
using
|
a set of
<term>
manual word alignments
</term>
|
#9759
We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments. |
|
sense per collocation observation
</term>
by
|
using
|
triplets of
<term>
words
</term>
instead of
|
#10158
This approach differs from other approaches to WSI in that it enhances the effect of the one sense per collocation observation by using triplets of words instead of pairs. |
|
<term>
two-step clustering process
</term>
|
using
|
<term>
sentence co-occurrences
</term>
as
<term>
|
#10173
The combination with a two-step clustering process using sentence co-occurrences as features allows for accurate results. |
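A small sketch, on invented data, of what "sentence co-occurrences as features" can mean in practice: each pair of words sharing a sentence increments a joint count, and the resulting counts feed the clustering step. The function name is illustrative.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_counts(sentences):
    """Count, for every word pair, how many sentences contain both words."""
    counts = Counter()
    for sent in sentences:
        # sorted(set(...)) gives each unordered pair a canonical key
        for a, b in combinations(sorted(set(sent)), 2):
            counts[(a, b)] += 1
    return counts
```

For word sense induction, rows of this count table (one vector per target word occurrence context) are what a clustering algorithm would group.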
|
four-participant face-to-face meetings
</term>
|
using
|
<term>
Bayesian Network
</term>
and
<term>
Naive
|
#10245
We present results on addressee identification in four-participant face-to-face meetings using Bayesian Network and Naive Bayes classifiers. |
|
the impact on
<term>
performance
</term>
of
|
using
|
<term>
ASR output
</term>
as opposed to
<term>
|
#10517
We then explore the impact on performance of using ASR output as opposed to human transcription. |
|
a
<term>
token classification task
</term>
,
|
using
|
various
<term>
tagging strategies
</term>
to
|
#10813
There are several approaches that model information extraction as a token classification task, using various tagging strategies to combine multiple tokens. |
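One widely used tagging strategy for combining multiple tokens into a single extracted unit is BIO labelling (B- opens a span, I- continues it, O is outside). A minimal decoding sketch, with invented labels and an illustrative helper name:

```python
def decode_bio(tokens, tags):
    """Group parallel (token, BIO-tag) sequences back into typed spans."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new span begins
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)        # continue the open span
        else:                             # O tag or inconsistent I- tag
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(toks)) for label, toks in spans]
```

Tagging `["John", "Smith", "visited", "New", "York"]` with `["B-PER", "I-PER", "O", "B-LOC", "I-LOC"]` decodes to one PER span and one LOC span, which is exactly the multi-token combination step the sentence refers to.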
|
automatically from
<term>
raw text
</term>
. Experiments
|
using
|
the
<term>
SemCor
</term>
and
<term>
Senseval-3
|
#11025
Experiments using the SemCor and Senseval-3 data sets demonstrate that our ensembles yield significantly better results when compared with the state of the art. |
|
In this paper , we describe the research
|
using
|
<term>
machine learning techniques
</term>
|
#11208
In this paper, we describe the research using machine learning techniques to build a comma checker to be integrated in a grammar checker for Basque. |
|
shown that these results can be improved
|
using
|
a bigger and a more homogeneous
<term>
corpus
|
#11293
Finally, we have shown that these results can be improved using a bigger and more homogeneous training corpus, that is, a bigger corpus written by a single author. |
|
</term>
and
<term>
linguistic pattern
</term>
. By
|
using
|
them , we can automatically extract such
|
#11442
By using them, we can automatically extract sentences that express opinions. |
|
Path-based inference rules
</term>
may be written
|
using
|
a
<term>
binary relational calculus notation
|
#12099
Path-based inference rules may be written using a binary relational calculus notation. |
|
constructed in a
<term>
semantic network
</term>
|
using
|
a variant of a
<term>
predicate calculus
|
#12139
Node-based inference rules can be constructed in a semantic network using a variant of a predicate calculus notation. |
|
the sum of each
<term>
character
</term>
. By
|
using
|
commands or
<term>
rules
</term>
which are
|
#12312
By using commands or rules which are defined to facilitate the construction of the expected format or of mathematical expressions, elaborate and attractive documents can be successfully obtained. |