D12-1038 | shows the overall procedure of the | iterative training | method. The loop of lines 6-13
D12-1038 | Predict-Self Reestimation We adopt the | iterative training | strategy to the baseline annotation
D12-1038 | source classifier. We propose an | iterative training | procedure to gradually improve
D12-1038 | little worse than that brought by | iterative training | . Figure 3 shows the performance
D11-1128 | variant of SOUNDEX along with | iterative training | was proposed by Darwish (2010
D15-1039 | of density, together with an | iterative training | procedure that makes use of these
D12-1038 | reestimation brings improvement to the | iterative training | at each iteration. The maximum
D12-1038 | two optimization strategies, | iterative training | and predict-self reestimation
D15-1105 | source corpus. They observed | iterative training | to improve training-set perplexity
D12-1038 | corpora of the next iteration. The | iterative training | terminates when the performance
D08-1092 | The full training setup used the | iterative training | procedure on all 2298 training
D13-1016 | from the source domain during an | iterative training | process. Representation learning
D12-1038 | human-annotated knowledge. The | iterative training | procedure proposed in this work
D13-1062 | two optimization strategies, | iterative training | and predict-self re-estimation
D12-1038 | optimization strategies in details. 3.2 | Iterative Training | for Annotation Transformation
D08-1039 | likelihood estimate, we resort to | iterative training | via the EM algorithm (Dempster
A00-1014 | , which were determined by an | iterative training | procedure using a corpus of transcribed
D12-1038 | and predict-self reestimation. | Iterative training | takes a global view, it conducts