ability to spend their time finding more data relevant to their task , and gives them
</term> to recover a <term> submanifold </term> of data from a <term> high dimensionality space </term>
Information System ) domain </term> . This data collection effort has been co-ordinated
Understanding System ) </term> , which creates the data for a <term> text retrieval application </term>
lr,11-2-P03-1058,bq automatically acquire <term> sense-tagged training data </term> from <term> English-Chinese parallel
lr,13-1-H05-1012,bq </term> based on <term> supervised training data </term> . We demonstrate that it is feasible
lr,13-4-P03-1033,bq learning </term> using real <term> dialogue data </term> collected by the <term> system </term>
lr,14-1-P03-1058,bq is the lack of <term> manually sense-tagged data </term> required for <term> supervised learning
lr,15-1-N03-1001,bq manual transcription </term> of <term> training data </term> . The method combines <term> domain
lr,17-5-H05-1095,bq better generalize from the <term> training data </term> . This paper investigates some <term>
lr,19-2-P01-1004,bq both <term> character - and word-segmented data </term> , in combination with a range of <term>
lr,2-1-N03-2003,bq </term> result . Sources of <term> training data </term> suitable for <term> language modeling
lr,27-3-H90-1060,bq the usual pooling of all the <term> speech data </term> from many <term> speakers </term> prior
lr,37-4-P03-1058,bq advantage that <term> manually sense-tagged data </term> have in their <term> sense coverage
lr,43-2-N03-2003,bq bigger performance gains from the <term> data </term> by using <term> class-dependent interpolation
lr,6-3-J05-4003,bq approach </term> , we extract <term> parallel data </term> from large <term> Chinese , Arabic
lr,7-2-N03-2003,bq In this paper , we show how <term> training data </term> can be supplemented with <term> text
lr,7-4-N03-1012,bq <term> system </term> against the <term> annotated data </term> shows that , it successfully classifies
lr,8-6-N01-1003,bq automatically learned from <term> training data </term> . We show that the trained <term> SPR
lr,9-1-H05-2007,bq <term> patterns </term> in <term> translation data </term> using <term> part-of-speech tag sequences