I05-2027 | helps to illustrate the sentence | compression process | . The part of speech tags in
D15-1012 | should be kept or dropped . Our | compression process | for each sentence s is displayed
P06-1048 | be inspired from the underlying | compression process | . Finding such a mechanism is
C94-1047 | Used this in combination with the | compression process | described in § 1.2 , the final
D15-1012 | framework forms a joint selection and | compression process | . In our phrase-based scoring
D15-1012 | . Later we will see that this | compression process | will not hurt grammatical fluency
E09-1007 | techniques can be seen as data | compression processes | . However , a simple NEs hard
C94-1047 | rmat . For the idea of the | compression process | readers can refer to [
D15-1012 | summary . We will also address the | compression process | for each sentence as finding
P06-2019 | corpus data for modelling the | compression process | without recourse to extensive
D15-1012 | being translated to Chinese . The | compression process | follows from an integer linear
P06-1048 | shift-reduce parsing paradigm . The | compression process | starts with an empty stack and
D13-1047 | compression model to capture the guided | compression process | using a set of word - , sentence
W04-1015 | of the decisions taken in the | compression process | can still be correct . This makes
P06-2019 | create a smaller tree , s . The | compression process | begins with an input list generated
D13-1047 | provide guidance to the human | compression process | by specifying a set of " important
C94-1047 | ( Arabic , Turkish , ... ) . A | compression process | on the multilingual dictionaries
P13-1136 | motivation and query relevance into the | compression process | by deriving a novel formulation
D14-1051 | could be seen as a probabilistic | compression process | that reduces the dimensionality
C94-1047 | Table 1 gives some results of the | compression process | for a few European languages