NGramShrink

Description

This operation shrinks or prunes an n-gram language model in one of three ways:

  • count pruning: prunes based on count cutoffs for the various n-gram orders specified by count_pattern.
  • relative entropy: prunes based on a relative entropy criterion with threshold theta.
  • Seymore: prunes based on the Seymore-Rosenfeld criterion with threshold theta.

The C++ classes are all derived from the base class NGramShrink.
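
Roughly speaking (see the references below for the precise definitions), the Seymore-Rosenfeld method scores each n-gram hw by the weighted difference N(hw) [ log p(w|h) - log p'(w|h) ], where N(hw) is the n-gram's (estimated) count and p'(w|h) is the probability the backed-off model assigns once the n-gram is removed; n-grams whose score falls below the threshold are pruned. The relative entropy method instead measures the Kullback-Leibler divergence between the model before and after removing the n-gram, pruning those n-grams whose removal changes the model by less than the threshold.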

Usage

ngramshrink [--opts] [in.mod [out.mod]]
  --method: type = string, one of: count_prune (default) | relative_entropy | seymore
  --count_pattern: type = string, default = ""
  --theta: type = double, default = 0.0
 
 class NGramCountPrune(StdMutableFst *model, string count_pattern);
 
 class NGramRelativeEntropy(StdMutableFst *model, double theta);
 
 class NGramSeymoreShrink(StdMutableFst *model, double theta);
 

Examples

ngramshrink --method=relative_entropy --theta=1.0e-7 in.mod >out.mod


// Read the input model, prune it with the relative entropy criterion
// (theta = 1.0e-7), and write out the shrunken model.
StdMutableFst *model = StdMutableFst::Read("in.mod");
NGramRelativeEntropy ngram(model, 1.0e-7);
ngram.ShrinkModel();
ngram.GetFst().Write("out.mod");
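
The other two methods follow the same pattern. The sketch below is illustrative only: the count_pattern string is left as a placeholder (its format is not described on this page), and the theta value and output file names are arbitrary.

// Count pruning: supply a count_pattern giving the count cutoffs per n-gram order.
StdMutableFst *count_model = StdMutableFst::Read("in.mod");
NGramCountPrune count_prune(count_model, "...");
count_prune.ShrinkModel();
count_prune.GetFst().Write("out.count_pruned.mod");

// Seymore-Rosenfeld pruning with an arbitrary threshold.
StdMutableFst *seymore_model = StdMutableFst::Read("in.mod");
NGramSeymoreShrink seymore(seymore_model, 4.0);
seymore.ShrinkModel();
seymore.GetFst().Write("out.seymore.mod");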

Caveats

The input n-gram model must be weight-normalized (the probabilities at each state must sum to 1).
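
If you are unsure whether a model is normalized, the sketch below shows one way to check before shrinking. It assumes the shrinking classes inherit a CheckNormalization() method from the n-gram model base class; verify that this method exists in your release before relying on it.

StdMutableFst *model = StdMutableFst::Read("in.mod");
NGramRelativeEntropy ngram(model, 1.0e-7);
// CheckNormalization() (assumed here) returns false if the probabilities
// leaving some state do not sum to 1.
if (!ngram.CheckNormalization()) {
  std::cerr << "input model is not normalized" << std::endl;
} else {
  ngram.ShrinkModel();
  ngram.GetFst().Write("out.mod");
}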

References

K. Seymore and R. Rosenfeld. "Scalable Backoff Language Models", Proc. of International Conference on Spoken Language Processing. 1996.

A. Stolcke. "Entropy-based Pruning of Backoff Language Models", Proc. of DARPA Broadcast News Transcription and Understanding Workshop. 1998.

-- MichaelRiley - 09 Dec 2011
