|
retrieval accuracy
</term>
, but much faster . We
|
also
|
provide evidence that our findings are
|
#1588
We also provide evidence that our findings are scalable. |
|
can make a fair copy of not only texts but
|
also
|
graphs and tables indispensable to our
|
#12256
In this paper, we report a system FROFF which can make a fair copy of not only texts but also graphs and tables indispensable to our papers. |
|
extensive system development effort but
|
also
|
improves the
<term>
transliteration accuracy
|
#5835
Our study reveals that the proposed method not only reduces an extensive system development effort but also improves the transliteration accuracy significantly. |
|
research
</term>
. This piece of work has
|
also
|
laid a foundation for exploring and harvesting
|
#8298
This piece of work has also laid a foundation for exploring and harvesting English-Chinese bitexts in a larger volume from the Web. |
|
evaluate their relative performances . We
|
also
|
introduce a new strategy , called
<term>
|
#10841
We also introduce a new strategy, called Begin/After tagging or BIA, and show that it is competitive to the best other strategies. |
|
way . This
<term>
generation system
</term>
|
also
|
uses
<term>
disjunctive feature structures
|
#16249
This generation system also uses disjunctive feature structures to reduce the number of copies of the derivation tree. |
|
model ’s score
</term>
of 88.2 % . The article
|
also
|
introduces a new
<term>
algorithm
</term>
for
|
#8863
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
|
interface
</term>
for browsing and editing was
|
also
|
designed and implemented . The principle
|
#17330
A customized interface for browsing and editing was also designed and implemented. |
|
</term>
of
<term>
abstract moves
</term>
. We
|
also
|
present a prototype
<term>
concordancer
</term>
|
#11760
We also present a prototype concordancer, CARE, which exploits the move-tagged abstracts for digital learning. |
|
appear cooperative or graceful unless they
|
also
|
incorporate numerous
<term>
non-literal aspects
|
#12569
While such decoding is an essential underpinning, much recent work suggests that natural language interfaces will never appear cooperative or graceful unless they also incorporate numerous non-literal aspects of communication, such as robust communication procedures. |
|
view of
<term>
language definition
</term>
are
|
also
|
noted . Representative samples from an
<term>
|
#13401
Several advantages from the point of view of language definition are also noted. |
|
field of
<term>
speech processing
</term>
, but
|
also
|
in the related areas of
<term>
Human-Machine
|
#18636
The paper provides an overview of the research conducted at LIMSI in the field of speech processing, but also in the related areas of Human-Machine Communication, including Natural Language Processing, Non Verbal and Multimodal Communication. |
|
machine translation task
</term>
, which can
|
also
|
be viewed as a
<term>
stochastic tree-to-tree
|
#9489
Second, we describe the graphical model for the machine translation task, which can also be viewed as a stochastic tree-to-tree transducer. |
|
database
</term>
.
<term>
Requestors
</term>
can
|
also
|
instruct the
<term>
system
</term>
to notify
|
#866
Requestors can also instruct the system to notify them when the status of a request changes or when a request is complete. |
|
carries important information yet it is
|
also
|
time consuming to document . Given the
|
#11
Oral communication is ubiquitous and carries important information yet it is also time consuming to document. |
|
algorithm
</term>
. In addition , it could
|
also
|
be used to help evaluate
<term>
disambiguation
|
#19315
In addition, it could also be used to help evaluate disambiguation algorithms that did not make use of the discourse constraint. |
|
96 % and a
<term>
recall
</term>
of 98 % . It
|
also
|
gets a
<term>
precision
</term>
of 70 % and
|
#11262
It also gets a precision of 70% and a recall of 49% in the task of placing commas. |
|
statistical machine translation system
</term>
. We
|
also
|
show that a good-quality
<term>
MT system
|
#9069
We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus. |
|
vital to
<term>
machine translation
</term>
are
|
also
|
discussed together with various interesting
|
#13314
Some examples of the difference between Japanese sentence structure and English sentence structure, which is vital to machine translation, are also discussed together with various interesting ambiguities. |
|
set
</term>
is 49.3 % . Similar results were
|
also
|
obtained on the
<term>
February '92 benchmark
|
#18798
Similar results were also obtained on the February '92 benchmark evaluation. |