lr-prod,11-3-P99-1080,ak to be tested on a standard task , <term> The Wall Street Journal </term> , allowing a fair comparison with
measure(ment),11-1-P99-1080,ak </term> to the problem of assigning <term> probabilities </term> to <term> words </term> following a given
model,1-3-P99-1080,ak questions </term> is considered . The <term> model </term> is to be tested on a standard task
model,23-3-P99-1080,ak fair comparison with the well-known <term> tri-gram model </term> .
model,4-2-P99-1080,ak </term> . In contrast with previous <term> decision-tree language model attempts </term> , an <term> algorithm </term> for selecting
other,13-1-P99-1080,ak assigning <term> probabilities </term> to <term> words </term> following a given <term> text </term>
other,13-2-P99-1080,ak <term> algorithm </term> for selecting <term> nearly optimal questions </term> is considered . The <term> model </term>
other,17-1-P99-1080,ak <term> words </term> following a given <term> text </term> . In contrast with previous <term>
tech,10-2-P99-1080,ak language model attempts </term> , an <term> algorithm </term> for selecting <term> nearly optimal
tech,4-1-P99-1080,ak language </term> . This paper discusses a <term> decision-tree approach </term> to the problem of assigning <term>