D08-1048 sparseness issue, by testing smoothing techniques to better model low frequency
E03-1053 1998). The details of the smoothing techniques are omitted here for simplicity
D12-1075 Experiments 1–3 evaluate our smoothing techniques applied directly to the task
D15-1165 4 Neural Networks. Usually, smoothing techniques are applied to count-based models
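Several of these contexts note that smoothing is what lets a count-based model assign probability to unseen events. As a concrete illustration, here is a minimal sketch of additive (Laplace) smoothing for a bigram model; the toy corpus, function name, and alpha value are placeholders for illustration, not drawn from any of the cited papers.

    from collections import Counter

    def additive_bigram_prob(bigrams, unigrams, vocab_size, w_prev, w, alpha=1.0):
        # Add-alpha smoothing: every bigram, seen or not, gets nonzero mass.
        return (bigrams[(w_prev, w)] + alpha) / (unigrams[w_prev] + alpha * vocab_size)

    tokens = "the cat sat on the mat".split()   # hypothetical toy corpus
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    print(additive_bigram_prob(bigrams, unigrams, len(unigrams), "the", "cat"))  # seen bigram
    print(additive_bigram_prob(bigrams, unigrams, len(unigrams), "cat", "mat"))  # unseen, still > 0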
C96-2107 plausibility), we propose two smoothing techniques. [Smoothing Method 1
E14-1068 consistency in parameter estimation and smoothing techniques. We then rank the cluster pair
C04-1167 estimation problem, but various smoothing techniques (Goodman, 2001) have led to
D15-1165 unseen events without additional smoothing techniques. In the following, we will
C04-1167 from the training corpus and various smoothing techniques. So the best performance can
E12-1055 with different vocabularies, smoothing techniques, and n-gram orders. One of
D12-1075 situation worse. However, with our smoothing techniques, we regain similar improvements
D08-1087 written style. Traditional n-gram smoothing techniques do not address such issues of
D10-1044 counts when using standard LM smoothing techniques (Kneser and Ney, 1995).
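The Kneser and Ney (1995) reference above is to absolute discounting with a continuation-probability backoff. A minimal sketch of interpolated Kneser-Ney for bigrams, assuming a fixed discount d of 0.75; the helper names and toy data are hypothetical, not taken from the cited paper.

    from collections import Counter

    def kneser_ney_bigram(bigrams, d=0.75):
        context_totals = Counter()   # c(w_prev, *): total count of each context
        followers = Counter()        # N1+(w_prev, *): distinct words after w_prev
        continuations = Counter()    # N1+(*, w): distinct contexts preceding w
        for (w_prev, w), c in bigrams.items():
            context_totals[w_prev] += c
            followers[w_prev] += 1
            continuations[w] += 1
        n_types = len(bigrams)       # N1+(*, *): total distinct bigram types

        def prob(w_prev, w):
            p_cont = continuations[w] / n_types
            denom = context_totals[w_prev]
            if denom == 0:           # unseen context: fall back to continuation prob
                return p_cont
            discounted = max(bigrams[(w_prev, w)] - d, 0) / denom
            lam = d * followers[w_prev] / denom   # mass freed up by discounting
            return discounted + lam * p_cont
        return prob

    tokens = "the cat sat on the mat".split()    # hypothetical toy corpus
    p = kneser_ney_bigram(Counter(zip(tokens, tokens[1:])))
    print(p("the", "cat"))

For a fixed context, the discounted bigram estimates plus the redistributed mass lam * p_cont sum to one, which is what makes the interpolation a proper distribution.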
D12-1075 original MIN-GREEDY setup with the smoothing techniques described above. 3.2 Improving
D09-1112 interpolated with many kinds of smoothing techniques (Chen and Goodman, 1998).
C00-1070 Chen, 1996), where various smoothing techniques were tested for a language model
D14-1197 2011; Vaswani et al., 2012) or smoothing techniques (Zhang and Chiang, 2014).
E03-1053 can be computed with different smoothing techniques, including linear smoothing
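Linear smoothing, as mentioned in this snippet, typically means interpolating maximum-likelihood estimates of different orders with weights that sum to one. A minimal sketch under that reading; the weights below are hypothetical, and in practice they are tuned on held-out data (Chen and Goodman, 1998).

    from collections import Counter

    def linear_interp(bigrams, unigrams, total_tokens, w_prev, w, lambdas=(0.7, 0.3)):
        # Weighted mix of bigram and unigram MLEs; lambdas must sum to 1.
        l2, l1 = lambdas
        p_bi = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
        p_uni = unigrams[w] / total_tokens
        return l2 * p_bi + l1 * p_uni

    tokens = "the cat sat on the mat".split()    # hypothetical toy corpus
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    print(linear_interp(bigrams, unigrams, len(tokens), "the", "cat"))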
D08-1093 which seems to suggest that the smoothing techniques used by the parsers employed
C04-1022 address this problem, in particular smoothing techniques (Chen and Goodman, 1998) and