Yuanchao Li
Title · Cited by · Year
Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning
Y Li, T Zhao, T Kawahara
INTERSPEECH 2019, 2019
Cited by: 186 · Year: 2019
Fusing ASR outputs in joint training for speech emotion recognition
Y Li, P Bell, C Lai
IEEE ICASSP 2022, 2022
Cited by: 45 · Year: 2022
Cooperative comfortable-driving at signalized intersections for connected and automated vehicles
X Shen, X Zhang, T Ouyang, Y Li, P Raksincharoensak
IEEE Robotics and Automation Letters, 2020
Cited by: 41 · Year: 2020
Expressing reactive emotion based on multimodal emotion recognition for natural conversation in human–robot interaction
Y Li, CT Ishi, K Inoue, S Nakamura, T Kawahara
Advanced Robotics, 2019
Cited by: 41 · Year: 2019
Mixture density networks-based knock simulator
X Shen, T Ouyang, C Khajorntraidet, Y Li, S Li, J Zhuang
IEEE/ASME Transactions on Mechatronics, 2021
Cited by: 34 · Year: 2021
Exploration of a self-supervised speech model: A study on emotional corpora
Y Li, Y Mohamied, P Bell, C Lai
IEEE SLT 2022, 2023
Cited by: 29 · Year: 2023
Emotion recognition by combining prosody and sentiment analysis for expressing reactive emotion by humanoid robot
Y Li, CT Ishi, N Ward, K Inoue, S Nakamura, K Takanashi, T Kawahara
APSIPA ASC 2017, 2017
Cited by: 27 · Year: 2017
Attention-based multimodal fusion for estimating human emotion in real-world HRI
Y Li, T Zhao, X Shen
ACM/IEEE HRI 2020, 2020
Cited by: 14 · Year: 2020
Interactional and pragmatics-related prosodic patterns in Mandarin dialog
NG Ward, Y Li, T Zhao, T Kawahara
Speech Prosody 2016, 2016
Cited by: 13 · Year: 2016
Alzheimer's Dementia Detection through Spontaneous Dialogue with Proactive Robotic Listeners
Y Li, C Lai, D Lala, K Inoue, T Kawahara
ACM/IEEE HRI 2022, 2022
Cited by: 8 · Year: 2022
Cross-Attention is Not Enough: Incongruity-Aware Dynamic Hierarchical Fusion for Multimodal Affect Recognition
Y Wang*, Y Li*, PP Liang, LP Morency, P Bell, C Lai
arXiv preprint arXiv:2305.13583, 2023
Cited by: 7* · Year: 2023
ASR and Emotional Speech: A Word-Level Investigation of the Mutual Impact of Speech and Emotion Recognition
Y Li*, Z Zhao*, O Klejch, P Bell, C Lai
INTERSPEECH 2023, 2023
Cited by: 6 · Year: 2023
Feeling estimation device, feeling estimation method, and storage medium
Y Li
US Patent 11,107,464, 2021
Cited by: 6 · Year: 2021
Towards improving speech emotion recognition for in-vehicle agents: Preliminary results of incorporating sentiment analysis by using early and late fusion methods
Y Li
ACM HAI 2018, 2018
Cited by: 6 · Year: 2018
Multimodal Dyadic Impression Recognition via Listener Adaptive Cross-Domain Fusion
Y Li, P Bell, C Lai
IEEE ICASSP 2023, 2023
Cited by: 4* · Year: 2023
Robotic Speech Synthesis: Perspectives on Interactions, Scenarios, and Ethics
Y Li, C Lai
ACM/IEEE HRI 2022 (Robo-Identity 2 Workshop), 2022
Cited by: 3 · Year: 2022
Utterance Behavior of Users While Playing Basketball with a Virtual Teammate
D Lala, Y Li, T Kawahara
ICAART 2017, 2017
Cited by: 3 · Year: 2017
I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue
Y Li, K Inoue*, L Tian*, C Fu, CT Ishi, H Ishiguro, T Kawahara, C Lai
ACM CHI 2023 (Extended Abstracts), 2023
Cited by: 2 · Year: 2023
Semi-supervised learning for multimodal speech and emotion recognition
Y Li
ACM ICMI 2021 (Doctoral Consortium), 817-821, 2021
Cited by: 2 · Year: 2021
Layer-Wise Analysis of Self-Supervised Acoustic Word Embeddings: A Study on Speech Emotion Recognition
A Saliba*, Y Li*, R Sanabria, C Lai
IEEE ICASSP 2024 (SASB Workshop), 2024
Cited by: 1 · Year: 2024