Doc ID | Left context | Match | Right context
D08-1105 | maximize the benefits by performing | active learning | only on the more frequently occurring
D08-1105 | using feature augmentation with | active learning | . Our results show that this
C04-1201 | is a very time consuming task , | active learning | can provide a faster approach
D08-1105 | unlikely event Figure 2 : The | active learning | algorithm . that we have access
C04-1201 | future work we plan to investigate | active learning | with SVM for this problem . Given
D08-1112 | comments . <title> An Analysis of | Active Learning | Strategies for Sequence Labeling
D08-1105 | various curves stabilize after 35 | active learning | iterations , we only show the
D08-1105 | domain adaptation technique with | active learning | , we are able to effectively
D08-1105 | our adaptation examples during | active learning | . Hence , we perform active learning
D08-1105 | curves the results of applying | active learning | only to various sets of word
D08-1105 | types . In contrast , we perform | active learning | experiments on the hundreds of
D08-1112 | </title> <authors></authors> Abstract | Active learning | is well-suited to many problems
D08-1105 | by the OntoNotes data . For our | active learning | experiments , we use the uncertainty
D08-1105 | examples that have been selected via | active learning | thus far . We then use the AUGMENT
D08-1105 | introduced by Daume III ( 2007 ) , and | active learning | ( Lewis and Gale , 1994 ) to
D08-1112 | aims to shed light on the best | active learning | approaches for sequence labeling
D08-1105 | examples to annotate , we could use | active learning | ( Lewis and Gale , 1994 ) to
D08-1105 | WSD accuracy of 82.6 % after 10 | active learning | iterations . Note that in Section
D08-1105 | technique during each iteration of | active learning | to combine the SEMCOR examples
D08-1105 | select examples to annotate via | active learning | . Also , since we have found