|
statistical machine translation ( MT )
</term>
,
|
to
|
understand the
<term>
model
</term>
's strengths
|
#9866
The method allows a user to explore a model of syntax-based statistical machine translation (MT), to understand the model's strengths and weaknesses, and to compare it to other MT systems. |
|
Phrasal Lexicon ( DHPL )
</term>
[ Zernik88 ] ,
|
to
|
facilitate
<term>
language acquisition
</term>
|
#15805
We introduced a new linguistic representation, the Dynamic Hierarchical Phrasal Lexicon (DHPL) [Zernik88], to facilitate language acquisition. |
|
confines of
<term>
syntax
</term>
, for instance ,
|
to
|
the task of
<term>
semantic interpretation
|
#16452
The unique properties of tree-adjoining grammars (TAG) present a challenge for the application of TAGs beyond the limited confines of syntax, for instance, to the task of semantic interpretation or automatic translation of natural language. |
|
<term>
machine translation
</term>
, that is ,
|
to
|
make decisions on the basis of
<term>
translation
|
#18095
Currently some attempts are being made to use case-based reasoning in machine translation, that is, to make decisions on the basis of translation examples at appropriate points in MT. |
|
</term>
of
<term>
human language learners
</term>
,
|
to
|
the
<term>
output
</term>
of
<term>
machine translation
|
#570
The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems. |
|
be able , after attending this workshop ,
|
to
|
set out building an
<term>
SMT system
</term>
|
#8086
Participants should be able, after attending this workshop, to set out building an SMT system themselves and achieving good baseline results in a short time. |
|
noted I walked : to walk : : I laughed :
|
to
|
laugh ) . But
<term>
computational linguists
|
#5880
The reality of analogies between words is refuted by no one (e.g., I walked is to to walk as I laughed is to to laugh, noted I walked : to walk :: I laughed : to laugh). |
|
laughed is to to laugh , noted I walked :
|
to
|
walk : : I laughed : to laugh ) . But
<term>
|
#5873
The reality of analogies between words is refuted by no one (e.g., I walked is to to walk as I laughed is to to laugh, noted I walked : to walk :: I laughed : to laugh). |
|
Lexical-Functional Grammars ( LFG )
</term>
|
to
|
the domain of
<term>
sentence condensation
|
#2801
We present an application of ambiguity packing and stochastic disambiguation techniques for Lexical-Functional Grammars (LFG) to the domain of sentence condensation. |
|
methods ( BLEU , NIST , WER and PER )
</term>
|
to
|
building
<term>
classifiers
</term>
to predict
|
#8357
This paper investigates the utility of applying standard MT evaluation methods (BLEU, NIST, WER and PER) to building classifiers to predict semantic equivalence and entailment. |
|
machine-readable dictionaries ( MRD 's )
</term>
|
to
|
create a
<term>
broad coverage lexicon
</term>
|
#16008
We have drawn primarily on explicit and implicit information from machine-readable dictionaries (MRD's) to create a broad coverage lexicon. |
|
Unification Categorial Grammar ( UCG )
</term>
|
to
|
the framework of
<term>
Isomorphic Grammars
|
#15012
This paper discusses the application of Unification Categorial Grammar (UCG) to the framework of Isomorphic Grammars for Machine Translation pioneered by Landsbergen. |
|
disambiguation
</term>
is raised from 46.0 %
|
to
|
60.62 % by using this novel approach .
<term>
|
#17937
The accuracy rate of syntactic disambiguation is raised from 46.0% to 60.62% by using this novel approach. |
|
<term>
parsing accuracy
</term>
rate from 60 %
|
to
|
75 % , a 37 % reduction in error . We discuss
|
#19041
In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error. |
|
terms of both simplicity and efficiency —
|
to
|
work on
<term>
feature selection methods
</term>
|
#8923
We argue that the method is an appealing alternative — in terms of both simplicity and efficiency — to work on feature selection methods within log-linear (maximum-entropy) models. |
|
extraction tasks
</term>
because of its ability
|
to
|
capture arbitrary , overlapping
<term>
features
|
#6844
The information extraction system we evaluate is based on a linear-chain conditional random field (CRF), a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary, overlapping features of the input in a Markov model. |
|
copying
</term>
combined with its ability
|
to
|
handle
<term>
cyclic structures
</term>
without
|
#18037
The proposed scheme eliminates redundant copying while maintaining the quasi-destructive scheme's ability to avoid over copying and early copying combined with its ability to handle cyclic structures without algorithmic additions. |
|
demonstrate the
<term>
model
</term>
's ability
|
to
|
significantly reduce
<term>
character and
|
#2761
We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text. |
|
quasi-destructive scheme 's ability
</term>
|
to
|
avoid
<term>
over copying
</term>
and
<term>
|
#18026
The proposed scheme eliminates redundant copying while maintaining the quasi-destructive scheme's ability to avoid over copying and early copying combined with its ability to handle cyclic structures without algorithmic additions. |
|
</term>
, were compared in terms of the ability
|
to
|
represent two kinds of
<term>
similarity
</term>
|
#11521
Through two experiments, three methods for constructing word vectors, i.e., LSA-based, cooccurrence-based and dictionary-based methods, were compared in terms of the ability to represent two kinds of similarity, i.e., taxonomic similarity and associative similarity. |