Keywords:
- Acoustic features
- acoustic modeling
- AdaBoost
- Alzheimer's disease
- Anti-spoofing
- articulatory features
- Artificial Neural Networks
- ASR
- atypical speech
- audio deepfake
- Automatic accent assessment
- Automatic accent evaluation
- automatic gender recognition
- Automatic speaker verification (ASV)
- Automatic Speech Recognition
- automatic subword unit derivation
- bag of audio words
- bandwidth
- Benchmarking
- Binary features
- binary masking
- bioacoustics
- Blizzard Challenge
- BoAW
- boosting
- breathing pattern estimation
- breathing patterns
- call type classification
- call-type and caller classification
- Children speech recognition
- Classification
- CNN visualization
- ComParE features
- computational efficiency
- Conditional Random Fields
- confidence measures
- continuous speech recognition
- boosted binary features
- resource management
- Convolutional neural network
- Convolutional Neural Networks
- COVID-19 identification
- cross-database
- cross-transfer knowledge
- Customer satisfaction
- deep learning
- deep neural networks
- deepfake detection
- depression detection
- detection
- Direction of arrival estimation
- discretization
- domain adaptation
- dynamic programming
- Dysarthria
- Dysarthric speech
- embedding
- Emotion Recognition
- emotional prosody
- end-to-end acoustic modeling
- End-to-end learning
- end-to-end modelling
- end-to-end training
- expected performance and spoofability curve
- Expressive Vocalizations
- feature representations
- feature selection
- Few-shot learning
- fine-tuning
- fixed-size word patterns
- Formant identification
- Formants
- Foundation Model
- Foundation Models
- Fundamental frequency
- Fusion
- Gaussian mixture
- glottal source signals
- grapheme
- Grapheme subword units
- grapheme subwords
- grapheme-to-phoneme conversion
- grapheme-to-phoneme converter
- Graphemes
- Hidden Markov Model
- hidden Markov models
- human skeleton estimation
- human speech
- hypoglycemia
- integration of ASV and anti-spoofing
- Interpretable Models
- Interpretable features
- isolated word recognition
- Kalman filters
- KL-divergence
- KL-HMM
- Kullback-Leibler divergence
- Kullback-Leibler divergence based hidden Markov model
- Kullback-Leibler divergence based HMM
- language disorder
- Language Production
- Large Language Models
- leaderboard
- letter-to-sound rules
- lexical model
- Lexical modeling
- Lexicon
- local posterior probability
- localization
- long-term statistics
- LoRA
- low level descriptors
- Low resource language
- machine learning
- Mental Lexicon
- microphone array
- microphone arrays
- mobile biometrics
- modalities fusion
- modified ZFF
- multi-layer perceptron
- Multi-modal Approach
- multi-stream combination
- Multi-task learning
- multilayer perceptron
- multilayer perceptron network
- Multilingual
- multilingual acoustic modeling
- Multimodal
- multiple linear regression
- Multiple speaker localization
- multiple speakers
- multiple-stream combination
- multitask learning
- neural network
- neurocomputational models
- Noise Robustness
- non-native speech
- non-native speech recognition
- noninvasive
- Objective Evaluation
- Objective intelligibility
- Objective intelligibility Assessment
- objective measures
- overlapping speech recognition
- Paralinguistic speech processing
- Parkinson's disease
- Parkinson's disease detection
- parts-based approach
- Pathological speech
- Pathological Speech Processing
- PC-GITA
- PEFT
- Perceived fluency
- phoneme
- phoneme modeling
- Phoneme recognition
- phoneme subword units
- phoneme subwords
- phonemes
- Phonetic information
- phonetic representation
- Phonocardiogram
- Posterior features
- posterior probabilities
- pre-trained embedding
- pre-training domain
- predictive coding
- presentation attack
- Presentation Attack Detection
- probabilistic lexical modeling
- pronunciation generation
- pronunciation lexicon
- quantization
- Raw Speech
- raw waveform modelling
- raw waveforms
- raw-waveform CNN
- Reading Assessment
- recognition
- recurrent neural network
- Respiratory parameters
- S1-S2 detection
- Scottish Gaelic
- segment-level training
- Self-Organizing Maps
- Self-supervised embedding
- self-supervised learning
- sentence mode prediction
- sign language assessment
- Sign language processing
- signal processing
- sleepiness
- speaker verification
- speaker-specific features
- spectral statistics
- Speech Analysis
- speech and audio
- speech assessment
- Speech breathing
- Speech Emotion Recognition
- Speech enhancement
- Speech for health
- Speech Foundation Models
- Speech in health
- Speech intelligibility
- speech pathology detection
- speech recognition
- speech separation
- speech synthesis
- Speech technology
- Spoken Language Understanding
- Spoofing
- spoofing detection
- Steered response power
- String matching
- SVM
- syllable-level-features
- syllables
- synthetic reference templates
- Synthetic speech
- TANDEM features
- template-based approach
- template-based system
- Text classification
- text-to-speech synthesis
- token sequences
- tracking
- under-resourced speech recognition
- under-resourced languages
- universal phoneme set
- unsupervised adaptation
- utterance verification
- voice
- voice activity detection
- Voice Conversion
- wav2vec2.0
- zero frequency filter
- Zero frequency filtering
- zero-resourced speech recognition
- Zero-shot Speech Synthesis
Publications of Mathew Magimai-Doss sorted by first author
P
- End-to-End Acoustic Modeling using Convolutional Neural Networks for Automatic Speech Recognition, Idiap-RR-18-2016
- Convolutional Neural Networks-based Continuous Speech Recognition using Raw Speech Signal, in: International Conference on Acoustics, Speech and Signal Processing, IEEE, South Brisbane, QLD, pages 4295-4299, IEEE, 2015
- Analysis of CNN-based Speech Recognition System using Raw Speech as Input, Idiap-RR-23-2015
- Learning linearly separable features for speech recognition using convolutional neural networks, Idiap-RR-24-2015 [URL]
- Analysis of CNN-based Speech Recognition System using Raw Speech as Input, in: Proceedings of Interspeech, ISCA, Dresden, pages 11-15, ISCA, 2015
- Raw Speech Signal-based Continuous Speech Recognition using Convolutional Neural Networks, Idiap-RR-15-2014
- Convolutional Neural Networks-based Continuous Speech Recognition using Raw Speech Signal, Idiap-RR-18-2014
- Joint Phoneme Segmentation Inference and Classification using CRFs, in: Global Conference on Signal and Information Processing, Atlanta, GA, pages 587-591, IEEE, 2014 [DOI]
- Cross-transfer Knowledge between Speech and Text Encoders to Evaluate Customer Satisfaction, in: Proceedings of Interspeech, Kos Island, Greece, ISCA, 2024
- Privacy-Sensitive Audio Features for Speech/Nonspeech Detection, Idiap-RR-12-2011
- Privacy-Sensitive Audio Features for Speech/Nonspeech Detection, in: IEEE Transactions on Audio, Speech, and Language Processing, 19(8), 2011
- Evaluating the Robustness of Privacy-Sensitive Audio Features for Speech Detection in Personal Audio Log Scenarios, Idiap-RR-01-2010
- Evaluating the Robustness of Privacy-Sensitive Audio Features for Speech Detection in Personal Audio Log Scenarios, in: ICASSP 2010, 2010
- Investigating Privacy-Sensitive Features for Speech Detection in Multiparty Conversations, Idiap-RR-12-2009
- Investigating Privacy-Sensitive Features for Speech Detection in Multiparty Conversations, in: Proceedings of Interspeech 2009, 2009
- Speaker Change Detection with Privacy-Preserving Audio Cues, Idiap-RR-23-2009
- Speaker Change Detection with Privacy-Preserving Audio Cues, in: Proceedings of ICMI-MLMI 2009, 2009
- Hierarchical Tandem Features for ASR in Mandarin, in: Proceedings of Interspeech, 2011
- Exploiting Contextual Information for Improved Phoneme Recognition, in: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2008
- Hierarchical Tandem Features for ASR in Mandarin, Idiap-RR-39-2010
- MLP Based Hierarchical System for Task Adaptation in ASR, in: Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding, Merano, Italy, 2009
- Volterra Series for Analyzing MLP based Phoneme Posterior Probability Estimator, in: Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009
- Volterra Series for Analyzing MLP based Phoneme Posterior Probability Estimator, Idiap-RR-69-2008
- Analysis of MLP Based Hierarchical Phoneme Posterior Probability Estimator, in: IEEE Transactions on Audio, Speech, and Language Processing, 19(2):225-241, 2011
- Exploiting Contextual Information for Improved Phoneme Recognition, Idiap-RR-65-2007
- Using Commercial ASR Solutions to Assess Reading Skills in Children: A Case Report, in: Proceedings of Interspeech, pages 4573-4577, 2023 [DOI] [URL]
- Identification of F1 and F2 in speech using modified zero frequency filtering, in: Proceedings of Interspeech, 2021
- Detection of S1 and S2 locations in phonocardiogram signals using zero frequency filter, in: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
- Analysis of F0 and Cepstral Features for Robust Automatic Gender Recognition, Idiap-RR-30-2009
- Integrating audio and vision for robust automatic gender recognition, Idiap-RR-73-2008
- Comparing supervised and self-supervised embedding for ExVo Multi-Task learning track, in: Proceedings of the ICML Expressive Vocalizations Workshop held in conjunction with the 39th International Conference on Machine Learning, Maryland, USA, 2022
- Emotion information recovery potential of wav2vec2 network fine-tuned for speech recognition task, in: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Hyderabad, India, IEEE, 2025
- Automatic Parkinson's disease detection from speech: Layer selection vs adaptation of foundation models, in: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Hyderabad, India, IEEE, 2025
- On Detection of Depression in Parkinson's Disease Patients' Speech: Handcrafted Features vs. Speech Foundation Models, in: Automatic Assessment of Parkinsonian Speech, Springer Nature Switzerland AG, 2025 [URL]
- Implicit phonetic information modeling for speech emotion recognition, in: Proceedings of Interspeech, Dublin, Ireland, ISCA, 2023
- Towards learning emotion information from short segments of speech, in: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Rhodes Island, Greece, IEEE, 2023
R
- Grapheme and Multilingual Posterior Features for Under-Resourced Speech Recognition: A Study on Scottish Gaelic, in: IEEE International Conference on Acoustics, Speech and Signal Processing, 2013
- Grapheme and Multilingual Posterior Features For Under-Resource Speech Recognition: A Study on Scottish Gaelic, Idiap-RR-34-2012
- HMM-based Non-native Accent Assessment using Posterior Features, in: Proceedings of Interspeech, San Francisco, USA, 2016
- HMM-based Non-native Accent Assessment using Posterior Features, Idiap-RR-32-2015
- Automatic Accentedness Evaluation of Non-Native Speech Using Phonetic and Sub-Phonetic Posterior Probabilities, Idiap-RR-12-2015
- Automatic Accentedness Evaluation of Non-Native Speech Using Phonetic and Sub-Phonetic Posterior Probabilities, in: Proceedings of Interspeech, 2015
- Articulatory feature based continuous speech recognition using probabilistic lexical modeling, in: Computer Speech and Language, 36:233-259, 2016 [DOI]