model,11-1-H01-1058,bq | problem of combining several <term> language | models | ( LMs ) </term> . We find that simple <term> | #1039 In this paper, we address the problem of combining several language models (LMs). |
model,7-1-H01-1070,bq | practical approach employing <term> n-gram | models | </term> and <term> error-correction rules </term> | #1249 This paper proposes a practical approach employing n-gram models and error-correction rules for Thai key prediction and Thai-English language identification. |
model,31-2-P01-1004,bq | a range of <term> local segment contiguity | models | </term> ( in the form of <term> N-grams </term> | #1522 We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams). |
model,24-3-P01-1004,bq | superior to any of the tested <term> word N-gram | models | </term> . Further , in their optimum <term> | #1557 Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models. |
other,14-1-P01-1070,bq | on the construction of <term> statistical | models | </term> of <term> WH-questions </term> . These | #2139 We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions. |
model,1-2-P01-1070,bq | of <term> WH-questions </term> . These <term> | models | </term> , which are built from <term> shallow | #2144 These models, which are built from shallow linguistic features of questions, are employed to predict target variables which represent a user's informational goals. |
other,11-3-P01-1070,bq | predictive performance </term> of our <term> | models | </term> , including the influence of various | #2181 We report on different aspects of the predictive performance of our models, including the influence of various training and testing factors on predictive performance, and examine the relationships among the target variables. |
model,3-2-N03-1001,bq | combines <term> domain independent acoustic | models | </term> with off-the-shelf <term> classifiers | #2229 The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. |
model,21-1-N03-1017,bq | previously proposed <term> phrase-based translation | models | </term> . Within our framework , we carry | #2562 We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. |
model,18-2-N03-1017,bq | better and explain why <term> phrase-based | models | </term> outperform <term> word-based models | #2583 Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models outperform word-based models. |
model,21-2-N03-1017,bq | models </term> outperform <term> word-based | models | </term> . Our empirical results , which hold | #2586 Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models outperform word-based models. |
model,12-4-N03-1017,bq | <term> high-accuracy word-level alignment | models | </term> does not have a strong impact on | #2645 Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. |
tech,9-3-N03-1018,bq | <term> model </term> based on <term> finite-state | models | </term> , demonstrate the <term> model </term> | #2754 We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text. |
model,55-1-N03-1033,bq | priors </term> in <term> conditional loglinear | models | </term> , and ( iv ) fine-grained modeling | #2966 We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
model,25-1-N03-2036,bq | parameters </term> than similar <term> phrase-based | models | </term> . The <term> units of translation </term> | #3415 In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models. |
model,8-3-P03-1033,bq | we set up three dimensions of <term> user | models | </term> : <term> skill level </term> to the <term> | #4331 Specifically, we set up three dimensions of user models: skill level to the system, knowledge level on the target domain and the degree of hastiness. |
model,3-4-P03-1033,bq | <term> hastiness </term> . Moreover , the <term> | models | </term> are automatically derived by <term> | #4354 Moreover, the models are automatically derived by decision tree learning using real dialogue data collected by the system. |
model,9-1-C04-1147,bq | fast computation of <term> lexical affinity | models | </term> . The framework is composed of a | #6319 We present a framework for the fast computation of lexical affinity models. |
model,4-3-C04-1147,bq | </term> . In comparison with previous <term> | models | </term> , which either use arbitrary <term> | #6354 In comparison with previous models, which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models, in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus. |
model,22-3-C04-1147,bq | affinity </term> to create <term> sequential | models | </term> , in this paper we focus on <term> | #6373 In comparison with previous models, which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models, in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus. |