Hi, I'm Candy! I am an assistant professor at the School of Information Science, Japan Advanced Institute of Science and Technology (JAIST). My primary research interests are speech signal processing, machine learning, and social signal processing (SSP). My Ph.D. thesis focuses on privacy preservation and secure speech communication. In addition to my primary research, I have also carried out a research project on modeling personality traits and communication skills with Prof. Shogo Okada. I obtained my Ph.D. and M.S. degrees in Information Science from JAIST, advised by Prof. Masashi Unoki.
Research interests: speech information hiding, voice privacy, social signal processing, and machine learning.
News
We're excited to announce the publication of our latest research in the Journal of the Audio Engineering Society. This research explores the intersection of forensic science and speech technology, aiming to ensure a reliable understanding of poor-quality speech recordings used as evidence in criminal trials.
We successfully organized the 3rd ASEAN-IVO meeting and the JAIST-ASEAN deepfake detection hub symposium. We welcomed participants from NICT (Japan) and ASEAN institutions in Thailand, Indonesia, Brunei Darussalam, and Myanmar. The meetings were a resounding success, generating excitement and enthusiasm among all participants. The collaborative discussions and knowledge sharing laid the groundwork for future collaborations in spoof detection research.
Our paper Indonesian Speech Anti-Spoofing System: Data Creation and Convolutional Neural Network Models was presented by our student, Ms. Sarah Azka Arief, at the 11th International Conference on Advanced Informatics: Concepts, Theory, and Applications (ICAICTA 2024) in Singapore.
Our paper MAG-BERT-ARL for Fair Automated Video Interview Assessment has been accepted for publication in IEEE Access. This paper is a collaboration between UI (Indonesia) and JAIST (Japan) and was supported by the JST Sakura Science Exchange Program FY2023.
Our papers Unsupervised Anomalous Sound Detection Using Timbral and Human Voice Disorder-Related Acoustic Features and Anomalous Sound Detection Based on Time Domain Gammatone Filterbank and IDNN Model have been accepted for presentation at the 16th annual conference of the Asia-Pacific Signal and Information Processing Association (APSIPA 2024). These papers are a collaboration between ITB (Indonesia) and JAIST (Japan) and were supported by the JST Sakura Science Exchange Program FY2023.
Our paper Detecting Spoof Voices in Asian Non-Native Speech: An Indonesian and Thai Case Study has been accepted for presentation at the 16th annual conference of the Asia-Pacific Signal and Information Processing Association (APSIPA 2024).
Our papers UCSYSpoof: A Myanmar Language Dataset for Voice Spoofing Detection and Analysis of Pathological Features for Spoof Detection have been accepted for presentation at the 27th International Conference of the Oriental COCOSDA. These papers are a collaboration between UCSY (Myanmar), NECTEC (Thailand), and JAIST (Japan), and are part of the ASEAN IVO project titled ‘Spoof Detection for Automatic Speaker Verification’ (www.nict.go.jp/en/asean_ivo).
I participated in the 4th Symposium on Security and Privacy in Speech Communication and the 3rd VoicePrivacy Challenge on Kos Island, Greece, a satellite event of Interspeech. I served as a member of the program committee and as one of the session chairs during the VPC presentations.
I participated in Interspeech 2024 on Kos Island, Greece. Our paper Are Recent Deep Learning-Based Speech Enhancement Methods Ready to Confront Real-World Noisy Environments? was presented as a poster and attracted many visitors from academia, industry, and beyond. I am very glad to have been able to participate in this conference on-site for the first time!
Our paper Indonesian Speech Anti-Spoofing System: Data Creation and CNN Models has been accepted for presentation at the 11th International Conference on Advanced Informatics: Concepts, Theory, and Applications (ICAICTA 2024). This paper is a collaboration between ITB (Indonesia) and JAIST (Japan).
One of my co-advised undergraduate students (Sarah Azka) from Institut Teknologi Bandung finished her final defense. Well done!
Our paper on forensic speech enhancement has been accepted for publication in the Journal of the Audio Engineering Society. This paper introduces an innovative interdisciplinary project aiming to ensure a reliable understanding of poor-quality speech recordings used as evidence in criminal trials.
Our paper Do We Need to Watch It All? Efficient Job Interview Video Processing with Differentiable Masking has been accepted for presentation at the 26th ACM International Conference on Multimodal Interaction (ICMI 2024).
Our application for the JAIST Grant for the establishment of an advanced research base (JAIST Science Hub) FY2024 (令和6年度先端研究拠点形成支援(JAISTサイエンスハブ構築支援)) was accepted.
Our application for the JAIST Grant for fundamental research FY2024 (令和6年度研究拠点形成支援事業(萌芽的研究)) was accepted.
Three of my co-advised undergraduate students (Primanda, Malik, and Rifqi) from Institut Teknologi Bandung finished their final defenses. Well done!
One of my co-advised undergraduate students (Bimasena) from Universitas Indonesia finished his final defense. Well done!
Our paper Incremental Multimodal Sentiment Analysis on HAI Based on Multitask Active Learning with Inter-Annotator Agreement has been accepted for presentation at the 12th International Conference on Affective Computing and Intelligent Interaction (ACII 2024).
Our paper Are Recent Deep Learning-Based Speech Enhancement Methods Ready to Confront Real-World Noisy Environments? has been accepted for presentation at the 25th Interspeech Conference.