|
<term>
Oral communication
</term>
is ubiquitous and carries important information , yet it is
also
time-consuming to document .
|
#11
Oral communication is ubiquitous and carries important information, yet it is also time-consuming to document. |
|
We
also
report results of a preliminary ,
<term>
qualitative user evaluation
</term>
of the
<term>
system
</term>
, which , while broadly positive , indicates that further work needs to be done on the
<term>
interface
</term>
to make
<term>
users
</term>
aware of the increased potential of
<term>
IE-enhanced text browsers
</term>
.
|
#345
We also report results of a preliminary, qualitative user evaluation of the system, which, while broadly positive, indicates that further work needs to be done on the interface to make users aware of the increased potential of IE-enhanced text browsers. |
|
<term>
Requestors
</term>
can
also
instruct the
<term>
system
</term>
to notify them when the status of a
<term>
request
</term>
changes or when a
<term>
request
</term>
is complete .
|
#866
Requestors can also instruct the system to notify them when the status of a request changes or when a request is complete. |
|
The paper
also
proposes a
<term>
rule-reduction algorithm
</term>
applying
<term>
mutual information
</term>
to reduce the number of
<term>
error-correction rules
</term>
.
|
#1264
The paper also proposes a rule-reduction algorithm applying mutual information to reduce the number of error-correction rules. |
|
We
also
provide evidence that our findings are scalable .
|
#1588
We also provide evidence that our findings are scalable. |
|
In order to perform an exhaustive comparison , we
also
evaluate a
<term>
hand-crafted template-based generation component
</term>
, two
<term>
rule-based sentence planners
</term>
, and two
<term>
baseline sentence planners
</term>
.
|
#2080
In order to perform an exhaustive comparison, we also evaluate a hand-crafted template-based generation component, two rule-based sentence planners, and two baseline sentence planners. |
|
In this paper , we show not only how
<term>
training data
</term>
can be supplemented with
<term>
text
</term>
from the
<term>
web
</term>
filtered to match the
<term>
style
</term>
and/or
<term>
topic
</term>
of the target
<term>
recognition task
</term>
, but
also
that it is possible to get bigger performance gains from the
<term>
data
</term>
by using
<term>
class-dependent interpolation
</term>
of
<term>
N-grams
</term>
.
|
#3059
In this paper, we show not only how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams. |
|
We
also
introduce a new way of automatically identifying
<term>
predicate argument structures
</term>
, which is central to our
<term>
IE paradigm
</term>
.
|
#3730
We also introduce a new way of automatically identifying predicate argument structures, which is central to our IE paradigm. |
|
Our analysis
also
highlights the importance of the issue of
<term>
domain dependence
</term>
in evaluating
<term>
WSD programs
</term>
.
|
#4917
Our analysis also highlights the importance of the issue of domain dependence in evaluating WSD programs. |
|
We
also
investigate the reason for that difference .
|
#5141
We also investigate the reason for that difference. |
|
Under this framework , a
<term>
joint source-channel transliteration model
</term>
,
also
called
<term>
n-gram transliteration model ( ngram TM )
</term>
, is further proposed to model the
<term>
transliteration process
</term>
.
|
#5783
Under this framework, a joint source-channel transliteration model, also called n-gram transliteration model (ngram TM), is further proposed to model the transliteration process. |
|
Our study reveals that the proposed method not only reduces an extensive system development effort but
also
improves the
<term>
transliteration accuracy
</term>
significantly .
|
#5835
Our study reveals that the proposed method not only reduces an extensive system development effort but also improves the transliteration accuracy significantly. |
|
Our method takes advantage of the different way in which
<term>
word senses
</term>
are lexicalised in
<term>
English
</term>
and
<term>
Chinese
</term>
, and
also
exploits the large amount of
<term>
Chinese text
</term>
available in
<term>
corpora
</term>
and on the
<term>
Web
</term>
.
|
#6978
Our method takes advantage of the different way in which word senses are lexicalised in English and Chinese, and also exploits the large amount of Chinese text available in corpora and on the Web. |
|
A
<term>
statistical translation model
</term>
is
also
presented that deals with such
<term>
phrases
</term>
, as well as a
<term>
training method
</term>
based on the maximization of
<term>
translation accuracy
</term>
, as measured with the
<term>
NIST evaluation metric
</term>
.
|
#7375
A statistical translation model is also presented that deals with such phrases, as well as a training method based on the maximization of translation accuracy, as measured with the NIST evaluation metric. |
|
It has
also
successfully been coupled with
<term>
rule-based and example-based machine translation modules
</term>
to build a
<term>
multi-engine machine translation system
</term>
.
|
#8170
It has also successfully been coupled with rule-based and example-based machine translation modules to build a multi-engine machine translation system. |
|
This piece of work has
also
laid a foundation for exploring and harvesting
<term>
English-Chinese bitexts
</term>
on a larger scale from the
<term>
Web
</term>
.
|
#8298
This piece of work has also laid a foundation for exploring and harvesting English-Chinese bitexts on a larger scale from the Web. |
|
We
also
introduce a novel
<term>
classification method
</term>
based on
<term>
PER
</term>
which leverages
<term>
part of speech information
</term>
of the
<term>
words
</term>
contributing to the
<term>
word matches and non-matches
</term>
in the
<term>
sentence
</term>
.
|
#8368
We also introduce a novel classification method based on PER which leverages part of speech information of the words contributing to the word matches and non-matches in the sentence. |
|
The article
also
introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
which takes advantage of the
<term>
sparsity of the feature space
</term>
in the
<term>
parsing data
</term>
.
|
#8863
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
|
We
also
show that a good-quality
<term>
MT system
</term>
can be built from scratch by starting with a very small
<term>
parallel corpus
</term>
( 100,000
<term>
words
</term>
) and exploiting a large
<term>
non-parallel corpus
</term>
.
|
#9069
We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus. |
|
Second , we describe the
<term>
graphical model
</term>
for the
<term>
machine translation task
</term>
, which can
also
be viewed as a
<term>
stochastic tree-to-tree transducer
</term>
.
|
#9489
Second, we describe the graphical model for the machine translation task, which can also be viewed as a stochastic tree-to-tree transducer. |