#1698In this paper, we study a parsing technique whose purpose is to improve the practical efficiency of RCL parsers.
tech,6-5-P01-1007,ak
non-deterministic parsing choices
</term>
of the
<term>
main
parser
</term>
for a
<term>
language L
</term>
are directed
#1707The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L .
tech,27-5-P01-1007,ak
forest
</term>
output by a prior
<term>
RCL
parser
</term>
for a suitable
<term>
superset
</term>
#1728The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L .
tech,8-3-N03-1026,ak
Furthermore , we propose the use of standard
<term>
parser
evaluation methods
</term>
for automatically
#2847Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems.
tech,14-6-H05-1064,ak
</term>
of [ ? ] 1.25 % beyond the
<term>
base
parser
</term>
, and an [ ? ] 0.25 % improvement
#5542The model gives an F-measure improvement of [?] 1.25% beyond the base parser, and an [?] 0.25% improvement beyond the Collins (2000) reranker.
tech,3-3-I05-2044,ak
Previous works on
<term>
shift-reduce dependency
parsers
</term>
may not guarantee the connectivity
#6655Previous works on shift-reduce dependency parsers may not guarantee the connectivity of a dependency tree due to their weakness at resolving the right-side dependencies.
tech,4-4-I05-2044,ak
a
<term>
two-phase shift-reduce dependency
parser
</term>
based on
<term>
SVM learning
</term>
#6682This paper proposes a two-phase shift-reduce dependency parser based on SVM learning.
#6722In experimental evaluation, our proposed method outperforms previous shift-reduce dependency parsers for the Chinese language, showing improvement of dependency accuracy by 10.08%.
tech,18-4-I05-6010,ak
very positive effects on
<term>
stochastic
parsers
</term>
trained on the
<term>
treebank
</term>
#7869We argue that a more sophisticated and fine-grained annotation in the treebank would have very positive effects on stochastic parsers trained on the treebank and on grammars induced from the treebank, and it would make the treebank more valuable as a source of data for theoretical linguistic investigations.
tech,11-1-J05-1003,ak
output of an existing
<term>
probabilistic
parser
</term>
. The
<term>
base parser
</term>
produces
#8026This article considers approaches which rerank the output of an existing probabilistic parser.
tech,1-2-J05-1003,ak
probabilistic parser
</term>
. The
<term>
base
parser
</term>
produces a set of
<term>
candidate
#8030The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses.
tech,34-5-P05-1010,ak
comparable to that of an
<term>
unlexicalized PCFG
parser
</term>
created using extensive
<term>
manual
#8584In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F1, sentences ≤ 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.
tech,5-2-P05-1034,ak
<term>
source-language
</term><term>
dependency
parser
</term>
,
<term>
target language
</term><term>
#8869This method requires a source-language dependency parser, target language word segmentation and an unsupervised word alignment component.
tech,36-4-P05-1034,ak
linguistic generality available in a
<term>
parser
</term>
. In this paper , we present an
<term>
#8947We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser.
tech,7-1-P05-1039,ak
paper , we present an
<term>
unlexicalized
parser
</term>
for
<term>
German
</term>
which employs
#8957In this paper, we present an unlexicalized parser for German which employs smoothing and suffix analysis to achieve a labelled bracket F-score of 76.2, higher than previously reported results on the NEGRA corpus.
tech,16-2-P05-1039,ak
smoothing
</term>
in an
<term>
unlexicalized
parser
</term>
allows us to better examine the interplay
#9002In addition to the high accuracy of the model, the use of smoothing in an unlexicalized parser allows us to better examine the interplay between smoothing and parsing results.
tech,26-2-P05-1076,ak
the output of a
<term>
robust statistical
parser
</term>
. It uses a powerful
<term>
pattern-matching
#10318The system incorporates a decision-tree classifier for 30 SCF types which tests for the presence of grammatical relations (GRs) in the output of a robust statistical parser.
tech,9-5-P80-1026,ak
</term>
, a
<term>
bottom-up pattern-matching
parser
</term>
that we have designed and implemented
#13704We go on to describe FlexP, a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.
tech,1-2-P81-1032,ak
structures
</term>
. Although
<term>
single-strategy
parsers
</term>
have met with a measure of success
#13748Although single-strategy parsers have met with a measure of success, a multi-strategy approach is shown to provide a much higher degree of flexibility, redundancy, and ability to bring task-specific domain knowledge (in addition to general linguistic knowledge) to bear on both grammatical and ungrammatical input.
tech,2-1-P81-1033,ak
multi-strategy framework . A flexible
<term>
parser
</term>
can deal with
<term>
input
</term>
that
#13871A flexible parser can deal with input that deviates from its grammar, in addition to input that conforms to it.