Abstract
In this paper, a hybrid language model that combines a word-based n-gram and a category-based Stochastic Context-Free Grammar (SCFG) is evaluated on training data sets of increasing size. Different estimation algorithms for learning SCFGs in General Format and in Chomsky Normal Form are considered. Experiments on the UPenn Treebank corpus are reported; they are evaluated in terms of test-set perplexity and word error rate in a speech recognition experiment.
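As a rough illustration of the kind of hybrid model the abstract describes, the sketch below assumes the word-based n-gram and the category-based SCFG components are combined by linear interpolation and evaluated with test-set perplexity; the combination scheme, the weight `lam`, and the function names `hybrid_prob`, `test_set_perplexity`, `p_ngram`, and `p_scfg` are illustrative assumptions, not the paper's actual formulation.

```python
import math

def hybrid_prob(word, history, p_ngram, p_scfg, lam=0.5):
    """Hypothetical hybrid probability: linear interpolation of a
    word-based n-gram probability and a category-based SCFG probability
    for the same word given its history. `p_ngram` and `p_scfg` are
    placeholder callables returning P(word | history) under each model."""
    return lam * p_ngram(word, history) + (1.0 - lam) * p_scfg(word, history)

def test_set_perplexity(sentences, prob_fn):
    """Test-set perplexity: exp of the negative average log-probability
    per word, computed with the combined model `prob_fn`."""
    log_prob, n_words = 0.0, 0
    for sentence in sentences:
        history = []
        for word in sentence:
            log_prob += math.log(prob_fn(word, tuple(history)))
            history.append(word)
            n_words += 1
    return math.exp(-log_prob / n_words)
```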
Original language | English |
---|---|
Pages (from-to) | 586-594 |
Number of pages | 9 |
Publication | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
Volume | 3523 |
No. | II |
DOI | |
Status | Published - 2005 |
Event | Second Iberian Conference on Pattern Recognition and Image Analysis, IbPRIA 2005 - Estoril, Portugal. Duration: 07 Jun 2005 → 09 Jun 2005 |