PUBLICATIONS:
Journals
13. Iván López-Espejo, Eros Roselló, Amin Edraki, Naomi Harte and Jesper Jensen: “Noise-Robust Hearing Aid Voice Control”. IEEE Signal Processing Letters, vol. 32, pp. 241-245, January 2025.
12. Iván López-Espejo, Amin Edraki, Wai-Yip Chan, Zheng-Hua Tan and Jesper Jensen: “On the deficiency of intelligibility metrics as proxies for subjective intelligibility”. Elsevier Speech Communication, vol. 150, pp. 9-22, May 2023.
11. Georg Ø. Rønsch, Iván López-Espejo, Daniel Michelsanti, Yuying Xie, Petar Popovski and Zheng-Hua Tan: “Utilization of Acoustic Signals with Generative Gaussian and Autoencoder Modeling for Condition-Based Maintenance of Injection Moulds”. International Journal of Computer Integrated Manufacturing, pp. 1-16, October 2022.
10. Santi Prieto, Alfonso Ortega, Iván López-Espejo and Eduardo Lleida: “Shouted and Whispered Speech Compensation for Speaker Verification Systems”. Digital Signal Processing, vol. 127, 103536, July 2022.
9. Iván López-Espejo, Zheng-Hua Tan, John H. L. Hansen and Jesper Jensen: “Deep Spoken Keyword Spotting: An Overview”. IEEE Access, vol. 10, pp. 4169-4199, January 2022.
8. Iván López-Espejo, Zheng-Hua Tan and Jesper Jensen: “A Novel Loss Function and Training Strategy for Noise-Robust Keyword Spotting”. IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 29, pp. 2254-2266, June 2021.
7. Iván López-Espejo, Zheng-Hua Tan and Jesper Jensen: “Improved External Speaker-Robust Keyword Spotting for Hearing Assistive Devices”. IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 28, pp. 1233-1247, April 2020.
6. Juan M. Martín-Doñas, Antonio M. Peinado, Iván López-Espejo and Ángel M. Gómez: “Dual-Channel Speech Enhancement Based on Extended Kalman Filter Relative Transfer Function Estimation”. MDPI Applied Sciences, vol. 9, June 2019.
5. Iván López-Espejo, Antonio M. Peinado, Ángel M. Gómez and José A. González: “Dual-Channel Spectral Weighting for Robust Speech Recognition in Mobile Devices”. Digital Signal Processing, vol. 75, pp. 13-24, April 2018.
4. Iván López-Espejo, Antonio M. Peinado, Ángel M. Gómez and José A. González: “Dual-Channel VTS Feature Compensation for Noise-Robust Speech Recognition on Mobile Devices”. IET Signal Processing, vol. 11, pp. 17-25, February 2017.
3. Iván López-Espejo, Antonio M. Peinado, Ángel M. Gómez and Juan M. Martín-Doñas: “Deep Neural Network-Based Noise Estimation for Robust ASR in Dual-Microphone Smartphones”. Lecture Notes in Artificial Intelligence, vol. 10077, pp. 117–127, November 2016.
2. Nadir Benamirouche, Bachir Boudraa, Ángel M. Gómez, José L. Pérez-Córdoba and Iván López-Espejo: “A Dynamic FEC for Improved Robustness of CELP-Based Codec”. Lecture Notes in Artificial Intelligence, vol. 10077, pp. 14–23, November 2016.
1. Iván López-Espejo, José A. González, Ángel M. Gómez and Antonio M. Peinado: “A Deep Neural Network Approach for Missing-Data Mask Estimation on Dual-Microphone Smartphones: Application to Noise-Robust Speech Recognition”. Lecture Notes in Artificial Intelligence, vol. 8854, pp. 119–128, November 2014.
Conferences
31. Ram C. M. C. Shekar and Iván López-Espejo: “LIWhiz: A Non-Intrusive Lyric Intelligibility Prediction System for the Cadenza Challenge”. Proc. ICASSP Workshops, Barcelona (Spain), May 2026.
30. J. A. González López, N. Calet Ruiz, P. Checa Martínez, Á. Gómez García, M. Hervás Torres, I. López-Espejo, R. Magán Carrión, J. Maldonado Valderrama, A. Martín Molina, M. Ouellet, A. Peinado Herreros, J. L. Pérez Córdoba, R. Rodríguez Gómez and I. Simpson: “Desarrollo de TutorGPT: Un Proyecto de Innovación Docente”. XXII International Forum on Evaluation of the Quality of Research and Higher Education (FECIES), Virtual, June 2025.
29. D. A. Gross Ramírez, R. Liébana Ramón, O. Mujtaba Khanday, N. Calet Ruiz, P. Checa Fernández, M. Hervás Torres, I. C. Simpson, J. L. Pérez Córdoba, A. Galdón, G. Olivares, A. M. Gómez García, I. López-Espejo, R. Magán Carrión, R. A. Rodríguez Gómez, A. M. Peinado Herreros, J. Maldonado Valderrama, A. Martín Molina, L. Miccoli, M. Ouellet and J. A. González López: “IA al Servicio del Conocimiento y la Salud: De la Tutoría Académica a la Restauración del Lenguaje”. VII Jornadas de Investigación CIMCYC, Granada (Spain), June 2025.
28. Iván López-Espejo: “Sobre el uso de IA en la curaduría de las comunicaciones de voz de las misiones Apolo de la NASA”. Primer Encuentro Iberoamericano sobre Archivos e Inteligencia Artificial, Virtual, October 2024.
27. Ángel M. Gómez, Antonio M. Peinado, Victoria E. Sánchez, Iván López-Espejo, Alejandro Gómez-Alanis, Eros Roselló, José C. Sánchez-Valera and Juan M. Martín-Doñas: “Signal and Neural Processing against Spoofing Attacks and Deepfakes for Secure Voice Interaction (ASASVI)”. Proc. IberSPEECH, Aveiro (Portugal), November 2024.
26. Iván López-Espejo, Aditya Joglekar, Antonio M. Peinado and Jesper Jensen: “On Speech Pre-emphasis as a Simple and Inexpensive Method to Boost Speech Enhancement”. Proc. IberSPEECH, Aveiro (Portugal), November 2024.
25. Juan M. Martín-Doñas, Eros Roselló, Ángel M. Gómez, Aitor Álvarez, Iván López-Espejo and Antonio M. Peinado: “ASASVIcomtech: The Vicomtech-UGR Speech Deepfake Detection and SASV Systems for the ASVspoof5 Challenge”. Proc. ASVspoof Workshop, Kos (Greece), August 2024.
24. Femke B. Gelderblom, Tron V. Tronstad and Iván López-Espejo: “Evaluating Speech Enhancement Systems Through Listening Effort”. Proc. IWAENC, Aalborg (Denmark), September 2024.
23. Silas Antonisen and Iván López-Espejo: “PolySinger: Singing-Voice to Singing-Voice Translation from English to Japanese”. Proc. ISMIR, San Francisco (USA), November 2024.
22. Eros Roselló, Ángel M. Gómez, Iván López-Espejo, Antonio M. Peinado and Juan M. Martín-Doñas: “Anti-spoofing Ensembling Model: Dynamic Weight Allocation in Ensemble Models for Improved Voice Biometrics Security”. Proc. INTERSPEECH, Kos (Greece), September 2024.
21. Haolan Wang, Amin Edraki, Wai-Yip Chan, Iván López-Espejo and Jesper Jensen: “No-Reference Speech Intelligibility Prediction Leveraging a Noisy-Speech ASR Pre-Trained Model”. Proc. INTERSPEECH, Kos (Greece), September 2024.
20. Satwik Dutta, Iván López-Espejo, Dwight Irvin and John H. L. Hansen: “Joint Language and Speaker Classification in Naturalistic Bilingual Adult-Toddler Interactions”. Proc. Odyssey, Quebec (Canada), June 2024.
19. Aditya Joglekar, Iván López-Espejo and John H. L. Hansen: “Fearless Steps APOLLO: Identifying Conversational Mission-Critical Topics in NASA Apollo Missions Audio Based on Keyword Spotting”. NASA Human Research Program IWS, Galveston (USA), February 2024.
18. Iván López-Espejo, Santi Prieto, Alfonso Ortega and Eduardo Lleida: “Improved Vocal Effort Transfer Vector Estimation for Vocal Effort-Robust Speaker Verification”. Proc. IEEE MLSP, Rome (Italy), September 2023.
17. Iván López-Espejo, Ram C. M. C. Shekar, Zheng-Hua Tan, Jesper Jensen and John H. L. Hansen: “Filterbank Learning for Noise-Robust Small-Footprint Keyword Spotting”. Proc. ICASSP, Rhodes Island (Greece), June 2023.
16. Aditya Joglekar, Iván López-Espejo and John H. L. Hansen: “Fearless Steps APOLLO: Challenges in keyword spotting and topic detection for naturalistic audio streams”. 184th Meeting of the Acoustical Society of America, Chicago (USA), May 2023.
15. Ángel M. Gómez, Victoria E. Sánchez, Antonio M. Peinado, Juan M. Martín-Doñas, Alejandro Gómez-Alanís, Amelia Villegas-Morcillo, Eros Roselló, Manuel Chica, Celia García and Iván López-Espejo: “Fusion of Classical Digital Signal Processing and Deep Learning Methods (FTCAPPS)”. Proc. IberSPEECH, Granada (Spain), November 2022.
14. Iván López-Espejo, Zheng-Hua Tan and Jesper Jensen: “An Experimental Study on Light Speech Features for Small-Footprint Keyword Spotting”. Proc. IberSPEECH, Granada (Spain), November 2022.
13. Juan M. Martín-Doñas, Antonio M. Peinado, Iván López-Espejo and Ángel M. Gómez: “Dual-channel eKF-RTF Framework for Speech Enhancement with DNN-based Speech Presence Estimation”. Proc. IberSPEECH, Valladolid (Spain), March 2021.
12. Santi Prieto, Alfonso Ortega, Iván López-Espejo and Eduardo Lleida: “Shouted Speech Compensation for Speaker Verification Robust to Vocal Effort Conditions”. Proc. INTERSPEECH, Shanghai (China), October 2020.
11. Iván López-Espejo, Zheng-Hua Tan and Jesper Jensen: “Exploring Filterbank Learning for Keyword Spotting”. Proc. EUSIPCO, Amsterdam (The Netherlands), January 2021.
10. Iván López-Espejo, Zheng-Hua Tan and Jesper Jensen: “Keyword Spotting for Hearing Assistive Devices Robust to External Speakers”. Proc. INTERSPEECH, Graz (Austria), September 2019.
9. Iván López-Espejo, Santiago Prieto-Calero and Ana Iriarte-Ruiz: “X-Vector Speaker Verification System for NIST SRE 2018”. NIST SRE 2018 Evaluation Workshop, Athens (Greece), December 2018.
8. Juan M. Martín-Doñas, Iván López-Espejo, Ángel M. Gómez and Antonio M. Peinado: “A postfiltering approach for dual-microphone smartphones”. Proc. IberSPEECH, Barcelona (Spain), November 2018.
7. Iván López-Espejo, Antonio M. Peinado, Ángel M. Gómez, José A. González and Santiago Prieto-Calero: “Dual-Channel VTS Feature Compensation with Improved Posterior Estimation”. Proc. EUSIPCO, Rome (Italy), September 2018.
6. Juan M. Martín-Doñas, Iván López-Espejo, Ángel M. Gómez and Antonio M. Peinado: “An Extended Kalman Filter for RTF Estimation in Dual-Microphone Smartphones”. Proc. EUSIPCO, Rome (Italy), September 2018.
5. Iván López-Espejo, Juan M. Martín-Doñas, Ángel M. Gómez and Antonio M. Peinado: “Unscented Transform-Based Dual-Channel Noise Estimation: Application to Speech Enhancement on Smartphones”. Proc. TSP, Athens (Greece), July 2018.
4. Juan M. Martín-Doñas, Ángel M. Gómez, Iván López-Espejo and Antonio M. Peinado: “Dual-channel DNN-based Speech Enhancement for Smartphones”. Proc. MMSP, Luton (UK), October 2017.
3. Juan M. Martín-Doñas, Iván López-Espejo, Carlos R. González-Lao, David Gallardo-Jiménez, Ángel M. Gómez, José L. Pérez-Córdoba, Victoria Sánchez, Juan A. Morales-Cordovilla and Antonio M. Peinado: “SecuVoice: A Spanish Speech Corpus for Secure Applications with Smartphones”. IberSPEECH, Lisbon (Portugal), November 2016.
2. Iván López-Espejo, José A. González, Ángel M. Gómez and Antonio M. Peinado: “DNN-Based Missing-Data Mask Estimation for Noise-Robust ASR in Dual-Microphone Smartphones”. UKSpeech, Norwich (UK), July 2015.
1. Iván López-Espejo, Ángel M. Gómez, José A. González and Antonio M. Peinado: “Feature Enhancement for Robust Speech Recognition on Smartphones with Dual-Microphone”. Proc. EUSIPCO, Lisbon (Portugal), September 2014.
Book chapters
2. Jorge Bachs Rubio, Ángel M. Gómez García, Antonio M. Peinado Herreros and Iván López-Espejo: “Humming Composer para Android”. Book of Final Degree Projects, Dept. of STTC, 2014.
1. José L. Carmona, Ángel M. Gómez, José A. González, Ján Koloda, Domingo López, Iván López-Espejo, Juan A. Morales-Cordovilla, Antonio M. Peinado, José L. Pérez-Córdoba and Victoria Sánchez Calle: “Estimación MMSE en Aplicaciones de Transmisión Multimedia”. Juan Antonio Morente Chiquero: In Memoriam, University of Granada Editorial, 2013.
Patents
1. Iván López-Espejo, Santiago Prieto Calero, Ana Iriarte Ruiz, David Roncal Redín, Miguel Á. Sánchez Yoldi and Eduardo Azanza Ladrón: “Authenticating a User”. WO 2020/007495 A1, January 2020.
Others
1. Iván López-Espejo: “End-to-End Deep Residual Learning with Dilated Convolutions for Myocardial Infarction Detection and Localization”. arXiv:1909.12923, September 2019.
DOCTORAL THESIS:
1. Robust Speech Recognition on Intelligent Mobile Devices with Dual-Microphone
CODES:
10. Singing-voice to singing-voice translation from English to Japanese (PolySinger)
9. MMSEv for vocal effort-robust speaker verification
8. Perceptually-motivated loss function based on the intelligibility predictor STGI
7. Multi-task deep residual learning for joint KWS and external speaker detection
6. Improved dual-channel vector Taylor series (VTS) feature compensation
5. UT-based Dual-Channel Noise Estimator (2C-UTNE)
4. Dual-channel Spectral Weighting (DSW)
3. Dual-channel vector Taylor series (VTS) feature compensation
2. Minimum Mean Square Noise (MMSN)
1. Dual-Channel Spectral Subtraction (DCSS)
MISCELLANY:
16. Euronoise Summer School 2025 presentation “Forensic Acoustics: An Introduction to Voice Identification”.
15. Slides for the course “Introduction to Speech Technologies”.
14. Scientific presentation “Evaluating Speech Enhancement Systems Through Listening Effort: Next Steps”.
13. Hearing aid noisy speech dataset.
12. Invited talk at ODAI 2024: “AI in Speech Recognition and Voice Control”.
11. Tutorial at Interspeech 2022: “Deep Spoken Keyword Spotting”.
10. Poster “Deep Spoken Keyword Spotting”.
9. CASPR Summer School 2021 presentation “Voice Controlled Hearing Assistive Devices”.
8. Tools to generate a noisy version of the Google Speech Commands Dataset v2.
7. Setting up Kaldi with TensorFlow for speaker verification.
6. Scientific presentation “Low-Resource Keyword Spotting for Hearing Assistive Devices”.
5. Poster “Low-Resource Keyword Spotting for Hearing Assistive Devices”.
4. Manually-annotated speaker gender labels for the Google Speech Commands Dataset v2.
2. Seminar “Robust ASR on Mobile Devices with Small Array”.
1. Poster “Acoustic Noise Estimation for Robust Speech Recognition”.