class,annotation-id,annotator,context
model,11-5-A94-1017,ak,example-retrieval ( ER ) </term> , i.e. , retrieving <term> examples </term> most similar to an <term> input expression
model,19-3-A94-1017,ak,translates a <term> sentence </term> utilizing <term> examples </term> effectively and performs accurate
other,10-8-A94-1017,ak,between <term> APs </term> demonstrates the <term> scalability </term> against <term> vocabulary size </term>
other,12-8-A94-1017,ak,the <term> scalability </term> against <term> vocabulary size </term> by <term> extrapolation </term> . Thus
other,13-6-A94-1017,ak,to implement the <term> ER </term> for <term> expressions </term> including a <term> frequent word </term>
other,16-5-A94-1017,ak,<term> examples </term> most similar to an <term> input expression </term> , is the most dominant part of the
other,16-6-A94-1017,ak,<term> expressions </term> including a <term> frequent word </term> on <term> APs </term> . Experimental
other,17-3-A94-1017,ak,Translation ) </term> , that translates a <term> sentence </term> utilizing <term> examples </term> effectively
tech,0-2-A94-1017,ak,spoken language translation </term> . <term> Spoken language translation </term> requires ( 1 ) an accurate <term> translation
tech,1-5-A94-1017,ak,concentrate on the second requirement . In <term> TDMT </term> , <term> example-retrieval ( ER ) </term>
tech,11-6-A94-1017,ak,that we only need to implement the <term> ER </term> for <term> expressions </term> including
tech,12-1-A94-1017,ak,associative processors ( APs ) </term> for <term> real-time spoken language translation </term> . <term> Spoken language translation
tech,14-9-A94-1017,ak,</term> , meets the vital requirements of <term> spoken language translation </term> . <term> Japanese texts </term> frequently
tech,15-8-A94-1017,ak,against <term> vocabulary size </term> by <term> extrapolation </term> . Thus , our model , <term> TDMT </term>
tech,19-6-A94-1017,ak,including a <term> frequent word </term> on <term> APs </term> . Experimental results show that
tech,24-3-A94-1017,ak,</term> effectively and performs accurate <term> structural disambiguation </term> and <term> target word selection </term>
tech,27-3-A94-1017,ak,structural disambiguation </term> and <term> target word selection </term> . This paper will concentrate on
tech,3-5-A94-1017,ak,requirement . In <term> TDMT </term> , <term> example-retrieval ( ER ) </term> , i.e. , retrieving <term> examples
tech,5-7-A94-1017,ak,Experimental results show that the <term> ER </term> can be drastically speeded up . Moreover
tech,5-9-A94-1017,ak,extrapolation </term> . Thus , our model , <term> TDMT </term> on <term> APs </term> , meets the vital
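Each row above follows a simple record layout: a term class (model / other / tech), an annotation id tied to paper A94-1017, an annotator id, and a keyword-in-context snippet with <term> ... </term> markup. The following is a minimal parsing sketch, assuming the comma-delimited form shown in the table; the record field names and helper functions are illustrative assumptions, not part of any official tooling for this annotation set.

import re
from dataclasses import dataclass

@dataclass
class TermAnnotation:
    # Field names are illustrative, not an official schema.
    term_class: str      # e.g. "model", "other", "tech"
    annotation_id: str   # e.g. "11-5-A94-1017"
    annotator: str       # e.g. "ak"
    context: str         # snippet with <term> ... </term> markup

def parse_row(row: str) -> TermAnnotation:
    # Split on the first three commas only; the context itself may contain commas.
    term_class, annotation_id, annotator, context = row.split(",", 3)
    return TermAnnotation(term_class.strip(), annotation_id.strip(),
                          annotator.strip(), context.strip())

def terms_in_context(context: str) -> list[str]:
    # Collect the spans marked as terms inside the snippet.
    return [m.strip() for m in re.findall(r"<term>(.*?)</term>", context)]

row = "model,11-5-A94-1017,ak,example-retrieval ( ER ) </term> , i.e. , retrieving <term> examples </term> most similar to an <term> input expression"
ann = parse_row(row)
print(ann.term_class, ann.annotation_id, ann.annotator)
print(terms_in_context(ann.context))  # only fully marked spans, here ['examples']

Note that the snippets are truncated contexts, so a row may begin with a dangling </term> or end inside an unclosed <term>; the regex above deliberately returns only spans that are fully enclosed within the snippet.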