#200 Despite the small size of the databases used, some results about the effectiveness of these indices can be obtained.
translation ( MT ) systems </term> . We believe that these <term> evaluation techniques </term> will provide
#584 We believe that these evaluation techniques will provide information about the human language learning process, the translation process, and the development of machine translation systems.
other ( than ) , such ( as ) , and besides . These <term> words </term> appear frequently enough
#1847 This paper presents a formal analysis for a large class of words called alternative markers, which includes other (than), such (as), and besides. These words appear frequently enough in dialog to warrant serious attention, yet present natural language search engines perform poorly on queries containing them.
models </term> of <term> WH-questions </term> . These <term> models </term> , which are built from
#2144 We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions. These models, which are built from shallow linguistic features of questions, are employed to predict target variables which represent a user's informational goals.
<term> unknown word features </term> . Using these ideas together , the resulting <term> tagger
#2981 Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.
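(For reference: a 4.4% relative error reduction at 97.24% accuracy implies a previous error rate of about 2.76 / (1 - 0.044) ≈ 2.89%, i.e. a previous best accuracy of roughly 97.11%; this figure is derived from the numbers above, not stated in the source.)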
recall </term> on both systems . Motivated by these arguments , we introduce a number of new
#4095 Motivated by these arguments, we introduce a number of new performance-enhancing techniques, including part-of-speech tagging, new similarity measures, and expanded stop lists.
<term> negative feedback </term> . Based on these results , we present an <term> ECA </term>
#5091 Based on these results, we present an ECA that uses verbal and nonverbal grounding acts to update dialogue state.
model </term> learns to automatically make these assignments based on a <term> discriminative
#5476 The model learns to automatically make these assignments based on a discriminative training criterion.
literature on <term> machine translation </term> . These <term> models </term> can be viewed as pairs
#5701 This paper investigates some computational problems associated with probabilistic translation models that have recently been adopted in the literature on machine translation. These models can be viewed as pairs of probabilistic context-free grammars working in a 'synchronous' way.
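(As an illustration of the formalism, ours rather than the source's: a synchronous pair of probabilistic context-free rules rewrites linked nonterminals in both grammars at once, e.g. a hypothetical reordering pair VP → V NP in one grammar paired with VP → NP V in the other, with a single probability attached to the linked pair so that the two derivations advance in lockstep.)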
define an initial <term> ranking </term> of these <term> parses </term> . A second <term> model
#8051 The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses.
features </term> , without concerns about how these <term> features </term> interact or overlap
#8100 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account.
<term> generative model </term> which takes these <term> features </term> into account . We introduce
#8119 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account.
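(A minimal sketch of this feature-based view, our illustration in Python rather than the paper's code; the tree attributes and weights are hypothetical. Each candidate tree is reduced to an arbitrary bag of features and scored by a weighted sum added to the base parser's log-probability, with no derivation or generative model required.)

    # Hypothetical sketch: a tree as an arbitrary, possibly overlapping feature set.
    def extract_features(tree):
        # Indicator features; overlap between them is harmless because the
        # score below just sums their weights (no generative story needed).
        return {("root", tree.label): 1.0, ("depth", tree.depth()): 1.0}

    def rerank(candidates, weights):
        # candidates: (tree, base_log_prob) pairs; the base parser's
        # probabilities give the initial ranking, the feature score adjusts it.
        def total(tree, logp):
            feats = extract_features(tree)
            return logp + sum(weights.get(f, 0.0) * v for f, v in feats.items())
        return max(candidates, key=lambda c: total(c[0], c[1]))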
efficient <term> decoder </term> and show that using these <term> tree-based models </term> in combination
#8920 We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser.
</term> respectively , and then interpolate these two <term> models </term> to improve the <term>
#9744 In this paper, we first train two statistical word alignment models with the large-scale out-of-domain corpus and the small-scale in-domain corpus respectively, and then interpolate these two models to improve the domain-specific word alignment.
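(A common form of such interpolation, given here as our sketch with assumed notation: a linear mixture of the two alignment models, p(a | s, t) = λ · p_in(a | s, t) + (1 - λ) · p_out(a | s, t) with 0 ≤ λ ≤ 1, where λ would typically be tuned on held-out in-domain data.)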
incorporating novel <term> features </term> that model these interactions into <term> discriminative log-linear
#10105 We show how to build a joint model of argument frames, incorporating novel features that model these interactions into discriminative log-linear models.
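(For context: a discriminative log-linear model of this kind has the standard form p(y | x) = exp(Σ_k λ_k f_k(x, y)) / Z(x), where the feature functions f_k can freely encode the interactions mentioned above; the general form is standard, while the specific interaction features are the paper's contribution.)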
polynomial time solution </term> for any of these <term> hard problems </term> ( unless P = NP
#10982 Since it is unlikely that there exists a polynomial time solution for any of these hard problems (unless P = NP and P^#P = P), our results highlight and justify the need for developing polynomial time approximations for these computations.
polynomial time approximations </term> for these computations . We also discuss some practical
#11010 Since it is unlikely that there exists a polynomial time solution for any of these hard problems (unless P = NP and P^#P = P), our results highlight and justify the need for developing polynomial time approximations for these computations.
is sufficiently general to be applied to these diverse problems , discuss its application
#11653 We describe a clustering algorithm which is sufficiently general to be applied to these diverse problems, discuss its application, and evaluate its performance.
commas </term> . Finally , we have shown that these results can be improved using a bigger
#12225 Finally, we have shown that these results can be improved using a bigger and more homogeneous corpus for training, that is, a bigger corpus written by a single author.
interconnected sets of <term> subpredicates </term> . These <term> subpredicates </term> may be thought
#12810 In this format, developed by the LNR research group at The University of California at San Diego, verbs are represented as interconnected sets of subpredicates. These subpredicates may be thought of as the almost inevitable inferences that a listener makes when a verb is used in a sentence.
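(For illustration only, our example rather than the source's: in such a decomposition a verb like 'give' might be represented by interconnected subpredicates along the lines of DO(agent, act), CAUSE(act, change), and CHANGE(POSS(agent, object), POSS(recipient, object)), each licensing the near-inevitable inferences a listener draws, e.g. that the recipient ends up possessing the object.)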