(Available in versions 1.1.0 and higher.)
(UNDER CONSTRUCTION)
This operation reestimates smoothed n-gram models by imposing marginalization constraints similar to those used in Kneser-Ney modeling on absolute-discounting models. Specifically, the algorithm modifies the lower-order distributions so that the expected frequencies of lower-order n-grams under the model equal the smoothed relative-frequency estimates of the baseline smoothing method. Unlike Kneser-Ney, this algorithm may require multiple iterations to converge, because the state probabilities change from one iteration to the next.
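The constraint described above can be sketched in hypothetical notation (not taken from the source): write pi(h) for the steady-state probability of the model state with history h, p(w | h) for the model's conditional probability, and p-bar(w | h) for the baseline smoothed relative-frequency estimate. The lower-order distribution for history h is then adjusted so that, roughly,

```latex
\sum_{h' \to h} \pi(h')\, p(w \mid h') \;=\; \pi(h)\, \bar{p}(w \mid h),
```

where h' -> h ranges over the higher-order histories that back off to h. Since the steady-state probabilities pi depend on the adjusted model, the constraint must be re-solved iteratively, which is why multiple iterations may be needed.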
ngrammarginalize [--options] [in.mod [out.mod]]
  --iterations: type = int, default = 1
    number of iterations of steady-state probability calculation
  --max_bo_updates: type = int, default = 10
    maximum within-iteration updates to backoff weights
  --output_each_iteration: type = bool, default = false
    whether to output a model after each iteration, in addition to the final model
  --steady_state_file: type = string, default = ""
    name of separate file from which to derive steady-state probabilities
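The --iterations flag controls the steady-state probability calculation. As an illustrative sketch only (not the OpenGrm implementation), one common way to obtain steady-state probabilities is power iteration on the model's state-transition matrix; the toy two-state chain below stands in for a bigram model:

```python
def steady_state(P, iterations=50):
    """Power iteration: pi <- pi P, starting from the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy transition matrix: P[w][w'] = p(w' | w) for a two-word vocabulary.
P = [[0.7, 0.3],
     [0.4, 0.6]]
pi = steady_state(P)
# pi converges to the analytic stationary distribution (4/7, 3/7).
```

More power-iteration steps give more accurate steady-state probabilities, at the cost of extra passes over the model; the default of a single iteration in ngrammarginalize trades accuracy for speed.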

class NGramMarginal(StdMutableFst *model); 
ngrammarginalize --iterations=5 earnest.mod >earnest.marg.mod
// Run five iterations of marginalization, rereading the input model each time;
// the weights vector carries state probabilities between iterations.
int total_iterations = 5;
vector<double> weights;
for (int iteration = 1; iteration <= total_iterations; ++iteration) {
  StdMutableFst *model = StdMutableFst::Read("in.mod", true);
  NGramMarginal ngrammarg(model);
  ngrammarg.MarginalizeNGramModel(&weights, iteration, total_iterations);
  // Only the final iteration's model is written out.
  if (iteration == total_iterations)
    ngrammarg.GetFst().Write("out.mod");
  delete model;
}
Note that this method assumes that the baseline smoothed model provides smoothed relative-frequency estimates for all n-grams in the model. The method is thus not generally applicable to models trained with Kneser-Ney smoothing, since the lower-order n-gram weights produced by that method do not represent relative-frequency estimates. See the reference below for further information on the algorithm.