ACL RD-TEC 1.0 Summarization of P01-1003

Paper Title:
IMPROVEMENT OF A WHOLE SENTENCE MAXIMUM ENTROPY LANGUAGE MODEL USING GRAMMATICAL FEATURES

Authors: Fredy A. Amaya and José Miguel Benedí

Other assigned terms:

  • bias
  • case
  • chomsky normal form
  • concept
  • conditional probability
  • context-free grammar
  • context-free grammars
  • convergence
  • derivation
  • derivation tree
  • derivation trees
  • derivations
  • distribution
  • entropy
  • estimation
  • events
  • experimental results
  • fact
  • feature
  • gaussian prior
  • grammar
  • grammar rules
  • grammars
  • grammatical features
  • grammatical framework
  • grammatical information
  • grammatical structure
  • implementation
  • interpolation
  • labeling
  • language model
  • language modeling toolkit
  • language models
  • linguistic
  • linguistics
  • log-likelihood
  • markov chain
  • maximum entropy principle
  • method
  • model parameters
  • modeling toolkit
  • mutual information
  • n-gram
  • n-gram model
  • n-gram models
  • n-grams
  • normal form
  • parse
  • parse tree
  • part-of-speech
  • part-of-speech tag
  • penn treebank
  • penn treebank corpus
  • perplexity
  • prior distribution
  • prior probability
  • probabilistic model
  • probabilities
  • probability
  • probability distribution
  • probability distributions
  • procedure
  • process
  • production rules
  • random sample
  • relation
  • relative frequency
  • sentence
  • sentences
  • statistics
  • stochastic context-free grammar
  • stochastic context-free grammars
  • structure of the sentence
  • symbol
  • symbols
  • technique
  • terminals
  • test set
  • time complexity
  • toolkit
  • training
  • training corpus
  • training data
  • training set
  • transition probabilities
  • tree
  • treebank
  • treebank corpus
  • trees
  • trigram
  • trigram model
  • vocabulary
  • word
  • word distribution
  • word sequences
  • words
  • wsj corpus

Extracted Section Types:

This page last edited on 10 May 2017.
