lr,6-1-H92-1003,bq |
describes a recently collected
<term>
spoken
|
language
|
corpus
</term>
for the
<term>
ATIS ( Air Travel
|
#18531
This paper describes a recently collected spoken language corpus for the ATIS (Air Travel Information System) domain. |
model,10-2-H92-1016,bq |
modelling
</term>
, the use of a
<term>
bigram
|
language
|
model
</term>
in conjunction with a
<term>
|
#18720
These include context-dependent phonetic modelling, the use of a bigram language model in conjunction with a probabilistic LR parser, and refinements made to the lexicon. |
model,11-1-H01-1058,bq |
address the problem of combining several
<term>
|
language
|
models ( LMs )
</term>
. We find that simple
|
#1038
In this paper, we address the problem of combining several language models (LMs). |
model,11-3-N03-2036,bq |
model
</term>
and a
<term>
word-based trigram
|
language
|
model
</term>
. During
<term>
training
</term>
|
#3441
During decoding, we use a block unigram model and a word-based trigram language model. |
model,14-2-C92-1055,bq |
approximation error
</term>
introduced by the
<term>
|
language
|
model
</term>
, traditional
<term>
statistical
|
#17835
Owing to the problem of insufficient training data and approximation error introduced by the language model, traditional statistical approaches, which resolve ambiguities by indirectly and implicitly using maximum likelihood method, fail to achieve high performance in real applications. |
model,16-3-P06-4011,bq |
the
<term>
Web
</term>
and building a
<term>
|
language
|
model
</term>
of
<term>
abstract moves
</term>
|
#11753
The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves. |
model,28-1-N03-2006,bq |
corpus
</term>
and , in addition , the
<term>
|
language
|
model
</term>
of an in-domain
<term>
monolingual
|
#3107
In order to boost the translation quality of EBMT based on a small-sized bilingual corpus, we use an out-of-domain bilingual corpus and, in addition, the language model of an in-domain monolingual corpus. |
model,3-1-H92-1026,bq |
generative probabilistic model of natural
|
language
|
</term>
, which we call
<term>
HBG
</term>
,
|
#18901
We describe a generative probabilistic model of natural language, which we call HBG, that takes advantage of detailed linguistic information to resolve ambiguity. |
model,4-3-P03-1051,bq |
<term>
algorithm
</term>
uses a
<term>
trigram
|
language
|
model
</term>
to determine the most probable
|
#4675
The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input. |
model,6-1-H94-1014,bq |
paper introduces a simple mixture
<term>
|
language
|
model
</term>
that attempts to capture
<term>
|
#21217
This paper introduces a simple mixture language model that attempts to capture long distance constraints in a sentence or paragraph. |
other,10-1-C94-1030,bq |
speech recognition
</term>
of a
<term>
natural
|
language
|
</term>
, it has been difficult to detect
|
#20624
In optical character recognition and continuous speech recognition of a natural language, it has been difficult to detect error characters which are wrongly deleted and inserted. |
other,10-2-I05-2014,bq |
scarcely used for the assessment of
<term>
|
language
|
pairs
</term>
like
<term>
English-Chinese
</term>
|
#7710
Yet, they are scarcely used for the assessment of language pairs like English-Chinese or English-Japanese, because of the word segmentation problem. |
other,10-5-P01-1007,bq |
of the
<term>
main parser
</term>
for a
<term>
|
language
|
L
</term>
are directed by a
<term>
guide
</term>
|
#1710
The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L. |
other,11-1-A92-1027,bq |
structure parsing
</term>
of
<term>
natural
|
language
|
</term>
that is tailored to the problem of
|
#17555
We present an efficient algorithm for chart-based phrase structure parsing of natural language that is tailored to the problem of extracting specific information from unrestricted texts where many of the words are unknown and much of the text is irrelevant to the task. |
other,11-4-N03-1001,bq |
evaluated on three different
<term>
spoken
|
language
|
system domains
</term>
. Motivated by the
|
#2302
The classification accuracy of the method is evaluated on three different spoken language system domains. |
other,11-5-C04-1147,bq |
terabyte corpus
</term>
to answer
<term>
natural
|
language
|
tests
</term>
, achieving encouraging results
|
#6429
We apply it in combination with a terabyte corpus to answer natural language tests, achieving encouraging results. |
other,11-6-J05-4003,bq |
can be applied with great benefit to
<term>
|
language
|
pairs
</term>
for which only scarce
<term>
|
#9111
Thus, our method can be applied with great benefit to language pairs for which only scarce resources are available. |
other,12-1-A94-1017,bq |
( APs )
</term>
for
<term>
real-time spoken
|
language
|
translation
</term>
.
<term>
Spoken language
|
#20207
This paper proposes a model using associative processors (APs) for real-time spoken language translation. |
other,12-3-C92-4207,bq |
<term>
SPRINT
</term>
, which takes
<term>
natural
|
language
|
texts
</term>
and produces a
<term>
model
</term>
|
#18443
It is done by an experimental computer program SPRINT, which takes natural language texts and produces a model of the described world. |
other,13-1-J86-4002,bq |
human-machine interactions
</term>
in a
<term>
natural
|
language
|
environment
</term>
. Because a
<term>
speaker
|
#14407
The goal of this work is the enrichment of human-machine interactions in a natural language environment. |