#59
Traditional information retrieval techniques use a histogram of keywords as the document representation, but oral communication may offer additional indices such as the time and place of the rejoinder and the attendance.

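Such a keyword histogram is the classic bag-of-words count vector; a minimal Python sketch (the tokenizer and stopword list are illustrative assumptions, not taken from the paper):

```python
from collections import Counter
import re

# Illustrative stopword list, not the paper's.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "but", "may", "such", "as"}

def keyword_histogram(document: str) -> Counter:
    """Represent a document as a histogram of its keywords
    (a bag-of-words count vector), as in traditional IR."""
    tokens = re.findall(r"[a-z]+", document.lower())
    return Counter(t for t in tokens if t not in STOPWORDS)

doc = "Oral communication may offer additional indices beyond keywords."
print(keyword_histogram(doc).most_common(3))
```
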
#1226
We describe our use of this approach in numerous fielded user studies conducted with the U.S. military.

#1768
While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases.

#2717
The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks.

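The entry leaves the model itself unspecified; one standard way to frame OCR post-correction is as a noisy channel, sketched below with toy hand-set scores (an assumed framing for illustration, not necessarily this paper's design):

```python
# Toy components; a real system would estimate both models from data.
LEXICON = {"machine": -2.0, "translation": -2.5, "nachine": -12.0}  # log P(word)

def channel_logprob(observed: str, intended: str) -> float:
    """Crude character-mismatch channel model, log P(observed | intended)."""
    if len(observed) != len(intended):
        return -20.0
    return -4.0 * sum(a != b for a, b in zip(observed, intended))

def correct_token(observed: str) -> str:
    """Noisy-channel decoding: argmax over lexicon words of
    log P(word) + log P(observed | word)."""
    return max(LEXICON, key=lambda w: LEXICON[w] + channel_logprob(observed, w))

print(correct_token("nachine"))  # -> 'machine'
```
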
#2843
Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems.

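Standard parser evaluation scores a parse against a gold standard; a sketch of how one such metric (F1 over dependency triples, an assumed instantiation) could score a condensation system's output:

```python
def dependency_f1(gold: set, system: set) -> float:
    """Parser-evaluation-style F1 over (head, relation, dependent) triples,
    here applied to a sentence condensation system's output."""
    if not gold or not system:
        return 0.0
    correct = len(gold & system)
    if correct == 0:
        return 0.0
    precision = correct / len(system)
    recall = correct / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical triples for a gold condensation and a system condensation.
gold = {("saw", "subj", "he"), ("saw", "obj", "film")}
system = {("saw", "subj", "he"), ("saw", "obj", "movie")}
print(round(dependency_f1(gold, system), 2))  # -> 0.5
```
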
#2903
Overall summarization quality of the proposed system is state-of-the-art, with guaranteed grammaticality of the system output due to the use of a constraint-based parser/generator.

#2925 #2943 #2960
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features.

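A sketch of what feature types (i) and (ii) look like as code; the templates and names are invented for illustration, and in the conditional loglinear setting of (iii) each feature's weight would be regularized by a prior such as a Gaussian:

```python
def tagger_features(words, tags, i):
    """Illustrative feature extraction for position i: preceding AND
    following tag context, plus lexical features that jointly condition
    on consecutive words. Not the paper's exact templates."""
    prev_tag = tags[i - 1] if i > 0 else "<S>"
    next_tag = tags[i + 1] if i + 1 < len(tags) else "</S>"
    prev_word = words[i - 1] if i > 0 else "<S>"
    return {
        "word=" + words[i]: 1.0,
        "prev_tag=" + prev_tag: 1.0,                   # preceding tag context
        "next_tag=" + next_tag: 1.0,                   # following tag context
        "bigram=" + prev_word + "_" + words[i]: 1.0,   # consecutive words jointly
    }

words = ["the", "old", "man"]
tags = ["DT", "JJ", "NN"]
print(sorted(tagger_features(words, tags, 1)))
```
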
#3096
In order to boost the translation quality of EBMT based on a small-sized bilingual corpus, we use an out-of-domain bilingual corpus and, in addition, the language model of an in-domain monolingual corpus.

#3432
During decoding, we use a block unigram model and a word-based trigram language model.

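The word-based trigram language model half of this setup is standard; a minimal sketch with maximum-likelihood counts and add-one smoothing (the smoothing choice is an assumption, not the paper's):

```python
import math
from collections import defaultdict

class TrigramLM:
    """Minimal word-based trigram language model: MLE counts with
    add-one smoothing. Illustrates the LM's role in decoding."""

    def __init__(self, sentences):
        self.tri = defaultdict(int)
        self.bi = defaultdict(int)
        self.vocab = set()
        for s in sentences:
            toks = ["<s>", "<s>"] + s + ["</s>"]
            self.vocab.update(toks)
            for i in range(2, len(toks)):
                self.tri[tuple(toks[i - 2:i + 1])] += 1
                self.bi[tuple(toks[i - 2:i])] += 1

    def logprob(self, sentence):
        toks = ["<s>", "<s>"] + sentence + ["</s>"]
        lp = 0.0
        for i in range(2, len(toks)):
            num = self.tri[tuple(toks[i - 2:i + 1])] + 1      # add-one smoothing
            den = self.bi[tuple(toks[i - 2:i])] + len(self.vocab)
            lp += math.log(num / den)
        return lp

lm = TrigramLM([["we", "use", "a", "model"], ["we", "use", "a", "corpus"]])
print(lm.logprob(["we", "use", "a", "model"]))
```
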
#4238
Unlike conventional methods that use hand-crafted rules, the proposed method enables easy design of the discourse understanding process.

#4713
To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus.

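A heavily simplified sketch of the stem-acquisition idea: strip plausible suffixes from unanalyzed corpus words and keep residues that recur. The suffix list and threshold are toy stand-ins, not the paper's actual setup:

```python
from collections import Counter

def acquire_stems(corpus_tokens, known_stems,
                  suffixes=("ing", "ed", "s"), min_count=3):
    """Toy unsupervised stem acquisition: strip candidate suffixes from
    words not already analyzed, and keep residues seen often enough."""
    candidates = Counter()
    for w in corpus_tokens:
        if w in known_stems:
            continue
        for suf in suffixes:
            if w.endswith(suf) and len(w) > len(suf) + 2:
                candidates[w[: -len(suf)]] += 1
    return {stem for stem, n in candidates.items() if n >= min_count}

corpus = ["walking", "walked", "walks", "talking", "walks", "walking"]
print(acquire_stems(corpus, known_stems={"talk"}))  # -> {'walk'}
```
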
#5184
We then use the predicates of such clauses to create a set of domain-independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system.

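The predicate-to-feature step can be made concrete as follows; the predicates here are plain Python functions invented for illustration, standing in for the probabilistic Horn clauses' predicates:

```python
def predicates_to_features(predicates, facts):
    """Each predicate becomes a binary feature that fires when the
    instance satisfies it. Predicate names are invented."""
    return {p.__name__: int(bool(p(facts))) for p in predicates}

def has_greeting(facts):
    return "greeting" in facts.get("acts", [])

def is_question(facts):
    return facts.get("ends_with") == "?"

print(predicates_to_features([has_greeting, is_question],
                             {"acts": ["greeting"], "ends_with": "?"}))
# -> {'has_greeting': 1, 'is_question': 1}
```

The resulting binary feature vectors are what a rule-based learner like SLIPPER or a memory-based learner like TiMBL would then consume.
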
#5415
The eventual objective of this project is to use these summaries to assist help-desk users and operators.

#5907
But computational linguists seem to be quite dubious about analogies between sentences: they would not be numerous enough to be of any use.

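The notion of a proportional analogy can be made concrete on strings; a toy solver restricted to a shared prefix-change pattern (sentence-level analogies, the entry's actual topic, are far harder than this):

```python
def solve_analogy(a: str, b: str, c: str):
    """Toy solver for [a : b :: c : ?]: find how b's suffix differs
    from a's past their common prefix, then apply the same change to c.
    Purely illustrative."""
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    old, new = a[k:], b[k:]
    if c.endswith(old):
        return c[: len(c) - len(old)] + new
    return None

print(solve_analogy("walk", "walked", "talk"))  # -> 'talked'
```
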
#6151
Our approach is based on the idea that one person tends to use one expression for one meaning.

#6358 #6367
In comparison with previous models, which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models, in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus.

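The "arbitrary window" baseline the entry contrasts itself with looks like this in miniature; the window size is exactly the arbitrary choice at issue:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=5):
    """Count pairs of words co-occurring within a fixed window; the
    window size is an arbitrary parameter, as the entry notes."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[tuple(sorted((w, tokens[j])))] += 1
    return counts

toks = "we use a block model and we use a trigram model".split()
print(cooccurrence_counts(toks, window=3).most_common(3))
```
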
#7130
In this paper, we use the information redundancy in multilingual input to correct errors in machine translation and thus improve the quality of multilingual summaries.

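A deliberately simplistic illustration of exploiting such redundancy: majority-vote over aligned candidate translations. Real systems would not have pre-aligned, equal-length candidates; this only shows the intuition:

```python
from collections import Counter

def vote_correction(candidate_translations):
    """Prefer the majority token at each aligned position across several
    machine-translated versions of the same content. Assumes pre-aligned,
    equal-length candidates (a strong simplification)."""
    return [Counter(tokens).most_common(1)[0][0]
            for tokens in zip(*candidate_translations)]

candidates = [
    ["the", "troops", "entered", "the", "city"],
    ["the", "troops", "entered", "the", "town"],
    ["the", "soldiers", "entered", "the", "city"],
]
print(vote_correction(candidates))  # -> ['the', 'troops', 'entered', 'the', 'city']
```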