Compare Products
Product 1

Features

• Decoder for large vocabulary continuous speech recognition
  ◦ word conditioned tree search (supporting across-word models)
  ◦ optimized HMM emission probability calculation using SIMD instructions
  ◦ refined acoustic pruning using language model lookahead
  ◦ word lattice generation
• Feature extraction
  ◦ a flexible framework for data processing: Flow
  ◦ MFCC features (see the sketch after this group)
  ◦ PLP features
  ◦ Gammatone features
  ◦ voicedness feature
  ◦ vocal tract length normalization (VTLN)
  ◦ support for several feature dimension reduction methods (e.g. LDA, PCA)
  ◦ easy implementation of new features as well as easy integration of external features using Flow networks
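The following is a minimal sketch of a textbook MFCC front end (pre-emphasis, windowing, power spectrum, mel filterbank, log compression, DCT), added only to illustrate what the MFCC entry above refers to. It is not the Flow implementation; the function names and parameter values (sample rate, FFT size, filter count, cepstral order) are illustrative assumptions.

```python
import numpy as np
from scipy.fftpack import dct  # type-II DCT for the final cepstral step


def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)


def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)


def mel_filterbank(n_filters, n_fft, sample_rate):
    # Triangular filters spaced uniformly on the mel scale.
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank


def mfcc_frame(frame, sample_rate=16000, n_fft=512, n_filters=24, n_ceps=12):
    # Pre-emphasis and Hamming window.
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])
    frame = frame * np.hamming(len(frame))
    # Power spectrum -> mel filterbank energies -> log -> DCT (cepstrum).
    spectrum = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    energies = mel_filterbank(n_filters, n_fft, sample_rate) @ spectrum
    return dct(np.log(energies + 1e-10), type=2, norm='ortho')[:n_ceps]
```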
• Acoustic modeling
  ◦ Gaussian mixture distributions for HMM emission probabilities (see the sketch after this group)
  ◦ phonemes in triphone context (or shorter contexts)
  ◦ across-word context dependency of phonemes
  ◦ allophone parameter tying using phonetic decision trees (classification and regression trees, CART)
  ◦ globally pooled diagonal covariance matrix (other types of covariance modelling are possible, but not fully tested)
  ◦ maximum likelihood training
  ◦ discriminative training (minimum phone error (MPE) criterion)
  ◦ linear algebra support using LAPACK and BLAS
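To make the emission model above concrete, here is a generic NumPy sketch of the log-likelihood of one feature frame under a Gaussian mixture whose densities share a single, globally pooled diagonal covariance vector. The function name and array layout are assumptions, not the toolkit's API.

```python
import numpy as np


def gmm_log_likelihood(x, weights, means, pooled_var):
    """Log p(x) for a Gaussian mixture with one globally pooled diagonal
    covariance vector shared by all densities.

    x          : (D,)   observation, e.g. one feature frame
    weights    : (M,)   mixture weights summing to 1
    means      : (M, D) one mean vector per density
    pooled_var : (D,)   shared diagonal covariance
    """
    D = x.shape[0]
    diff = x - means                                   # (M, D)
    log_norm = -0.5 * (D * np.log(2.0 * np.pi) + np.sum(np.log(pooled_var)))
    log_dens = log_norm - 0.5 * np.sum(diff ** 2 / pooled_var, axis=1)
    # Log-sum-exp over the mixture components for numerical stability.
    a = np.log(weights) + log_dens
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))
```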
• Language modeling
  ◦ support for language models in ARPA format (see the sketch after this group)
  ◦ weighted grammars (weighted finite state automata)
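ARPA-format n-gram models store log10 probabilities together with back-off weights. The sketch below shows the standard back-off lookup for a bigram, assuming the file has already been parsed into plain Python dictionaries; the data layout and toy values are illustrative and do not describe the toolkit's internal language model representation.

```python
def bigram_logprob(w1, w2, unigrams, bigrams):
    """Back-off lookup for log10 P(w2 | w1), ARPA-style.

    unigrams : dict word     -> (log10_prob, log10_backoff)
    bigrams  : dict (w1, w2) -> log10_prob
    """
    if (w1, w2) in bigrams:
        return bigrams[(w1, w2)]
    # Not listed: apply w1's back-off weight and fall back to the unigram.
    backoff = unigrams[w1][1] if w1 in unigrams else 0.0
    return backoff + unigrams[w2][0]


# Toy model: P(sat | cat) is listed explicitly, P(ran | cat) backs off.
unigrams = {"cat": (-1.0, -0.3), "sat": (-1.2, 0.0), "ran": (-1.5, 0.0)}
bigrams = {("cat", "sat"): -0.4}
print(bigram_logprob("cat", "sat", unigrams, bigrams))  # -0.4
print(bigram_logprob("cat", "ran", unigrams, bigrams))  # -0.3 + (-1.5) = -1.8
```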
• Neural networks (new in v0.6)
  ◦ training of arbitrarily deep feed-forward networks
  ◦ CUDA support for running on GPUs
  ◦ OpenMP support for running on CPUs
  ◦ variety of activation functions, training criteria and optimization algorithms
  ◦ sequence-discriminative training, e.g. MMI or MPE (new in v0.7)
  ◦ integration into the feature extraction pipeline ("Tandem approach")
  ◦ integration into the search and lattice processing pipeline ("Hybrid NN/HMM approach"; see the sketch after this group)
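In a hybrid NN/HMM setup the network estimates per-state posteriors, which are divided by the state priors to obtain scaled likelihoods that stand in for GMM emission scores during search. The sketch below shows this conversion for one frame with an illustrative single-hidden-layer network; all names, shapes and the choice of ReLU/softmax are assumptions, not the toolkit's configuration.

```python
import numpy as np


def forward(x, W1, b1, W2, b2):
    """Feed-forward pass of a one-hidden-layer network with softmax output,
    returning per-state posteriors p(state | x)."""
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
    z = W2 @ h + b2
    z = z - z.max()                    # numerical stability
    p = np.exp(z)
    return p / p.sum()


def scaled_log_likelihoods(posteriors, state_priors):
    """Divide posteriors by state priors (in the log domain) so the scores
    can replace GMM emission likelihoods in the HMM decoder."""
    return np.log(posteriors + 1e-20) - np.log(state_priors)
```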
• Speaker adaptation
  ◦ constrained MLLR (CMLLR, "feature space MLLR", fMLLR; see the sketch after this group)
  ◦ unsupervised maximum likelihood linear regression (MLLR) mean adaptation
  ◦ speaker / segment clustering using the Bayesian Information Criterion (BIC) as stopping criterion
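CMLLR (fMLLR) adapts the features rather than the model parameters: one affine transform x' = Ax + b is estimated per speaker under the maximum-likelihood criterion and applied to every frame before decoding. The sketch below only shows how such a transform is applied once estimated; the estimation itself (statistics accumulation and the row-wise update) is omitted, and the function name is an assumption.

```python
import numpy as np


def apply_cmllr(features, A, b):
    """Apply a speaker-specific CMLLR / fMLLR transform x' = A x + b
    to every frame.  features: (T, D), A: (D, D), b: (D,)."""
    return features @ A.T + b
```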
• Lattice processing
  ◦ n-best list generation
  ◦ confusion network generation and decoding
  ◦ lattice rescoring
  ◦ lattice-based system combination
• Input / output formats
  ◦ nearly all input and output data is in easily processable XML or plain text formats
  ◦ converter tools for the generation of NIST file formats are included
  ◦ HTK lattice format
  ◦ converter tools for HTK models
Product 2

Features

• Multiple platforms: Runs on numerous platforms and operating systems, from small embedded devices to large server systems.
• Neural network technology: Provides accurate speaker-independent speech recognition, even in noisy environments.
• Efficient memory and CPU usage: Operates within memory and MIPS constraints or runs many concurrent channels on a server system. See the product specification sheet for details.
• Multiple languages: Available in U.S. and U.K. English, Canadian and European French, German, Japanese, Korean, Castilian and Latin American Spanish, Swedish, and Italian.
• Phonetic component: Aligns word phonetics with audio data, enabling animators to synchronize a character's facial movements with the phonetic components of speech, so that animated characters "speak" with realistic facial movements.
| | Product 1 | Product 2 |
| --- | --- | --- |
| Languages | CPP | C, CPP, CS, Java, VB.NET |
| Source Type | Closed | Closed |
| License Type | Proprietary | Proprietary |
| OS Type | | |
| Pricing | | |