Panikos Heracleous
Verified email at aist.go.jp
Title · Cited by · Year
Accurate hidden Markov models for non-audible murmur (NAM) recognition based on iterative supervised adaptation
P Heracleous, Y Nakajima, A Lee, H Saruwatari, K Shikano
2003 IEEE Workshop on Automatic Speech Recognition and Understanding (IEEE …, 2003
Cited by 58 · 2003
Lip shape and hand position fusion for automatic vowel recognition in cued speech for French
P Heracleous, N Aboutabit, D Beautemps
IEEE Signal Processing Letters 16 (5), 339-342, 2009
Cited by 42 · 2009
Acoustic-to-articulatory inversion using speech recognition and trajectory formation based on phoneme hidden Markov models
AB Youssef, P Badin, G Bailly, P Heracleous
Interspeech 2009 - 10th Annual Conference of the International Speech …, 2009
Cited by 37 · 2009
Unvoiced speech recognition using tissue-conductive acoustic sensor
P Heracleous, T Kaino, H Saruwatari, K Shikano
EURASIP Journal on Advances in Signal Processing 2007, 1-11, 2006
Cited by 37 · 2006
Analysis and recognition of NAM speech using HMM distances and visual information
P Heracleous, VA Tran, T Nagai, K Shikano
IEEE Transactions on Audio, Speech, and Language Processing 18 (6), 1528-1538, 2009
Cited by 36 · 2009
Comparative study on spoken language identification based on deep learning
P Heracleous, K Takai, K Yasuda, Y Mohammad, A Yoneyama
2018 26th European Signal Processing Conference (EUSIPCO), 2265-2269, 2018
Cited by 32 · 2018
Cued speech automatic recognition in normal-hearing and deaf subjects
P Heracleous, D Beautemps, N Aboutabit
Speech Communication 52 (6), 504-512, 2010
Cited by 31 · 2010
A comprehensive study on bilingual and multilingual speech emotion recognition using a two-pass classification scheme
P Heracleous, A Yoneyama
PLoS ONE 14 (8), e0220386, 2019
Cited by 30 · 2019
Prediction models for risk of type-2 diabetes using health claims
M Nagata, K Takai, K Yasuda, P Heracleous, A Yoneyama
Proceedings of the BioNLP 2018 workshop, 172-176, 2018
Cited by 22 · 2018
Non-audible murmur (NAM) speech recognition using a stethoscopic NAM microphone
P Heracleous, Y Nakajima, A Lee, H Saruwatari, K Shikano
Cited by 22 · 2004
Implicit knowledge injectable cross attention audiovisual model for group emotion recognition
Y Wang, J Wu, P Heracleous, S Wada, R Kimura, S Kurihara
Proceedings of the 2020 International Conference on Multimodal Interaction …, 2020
Cited by 19 · 2020
Automatic recognition of speech without any audio information
P Heracleous, N Hagita
2011 IEEE International Conference on Acoustics, Speech and Signal …, 2011
Cited by 18 · 2011
Simultaneous recognition of multiple sound sources based on 3-D N-best search using a microphone array
P Heracleous, T Yamada, S Nakamura, K Shikano
Cited by 18 · 1999
Analysis of the visual Lombard effect and automatic recognition experiments
P Heracleous, CT Ishi, M Sato, H Ishiguro, N Hagita
Computer Speech & Language 27 (1), 288-300, 2013
Cited by 17 · 2013
Visual-speech to text conversion applicable to telephone communication for deaf individuals
P Heracleous, H Ishiguro, N Hagita
2011 18th International Conference on Telecommunications, 130-133, 2011
Cited by 16 · 2011
Speech emotion recognition in noisy and reverberant environments
P Heracleous, K Yasuda, F Sugaya, A Yoneyama, M Hashimoto
2017 Seventh International Conference on Affective Computing and Intelligent …, 2017
Cited by 14 · 2017
Continuous phoneme recognition in cued speech for French
P Heracleous, D Beautemps, N Hagita
2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO …, 2012
Cited by 14 · 2012
A tissue-conductive acoustic sensor applied in speech recognition for privacy
P Heracleous, Y Nakajima, H Saruwatari, K Shikano
Proceedings of the 2005 joint conference on Smart objects and ambient …, 2005
Cited by 14 · 2005
Unsupervised energy disaggregation using conditional random fields
P Heracleous, P Angkititrakul, N Kitaoka, K Takeda
IEEE PES Innovative Smart Grid Technologies, Europe, 1-5, 2014
Cited by 13 · 2014
Audible (normal) speech and inaudible murmur recognition using NAM microphone
P Heracleous, Y Nakajima, A Lee, H Saruwatari, K Shikano
2004 12th European Signal Processing Conference, 329-332, 2004
Cited by 13 · 2004