Secure Speech Communication



In the digital age, where information spreads instantly and at scale, fake and manipulated content is an increasingly serious threat. Some experts predict that within the next six years, more than 90% of digital content will have been manipulated to some degree. This alarming projection highlights the need for effective countermeasures. Manipulated content can cause severe damage when used for propaganda, disinformation, or cybercrime, with far-reaching implications: it can sway public opinion, disrupt democratic processes, and destroy reputations.

Despite advances in detection techniques, current technology reportedly identifies only about 65% of fake information, while the remaining 35% circulates online undetected. This is a significant challenge, because the tools needed to create and distribute fake information are becoming ever more accessible. The ease with which convincing, high-quality fakes can now be produced is a growing concern, and as the underlying technology evolves, detection will only become harder. This is particularly true for speech signals, which can be manipulated into fake recordings or deepfakes that are difficult to distinguish from genuine audio.

Given the harm that fake information can cause, effective defenses are crucial. This research proposes solutions specifically for the protection and detection of manipulated speech signals, which are increasingly used to spread fake information. Detecting fake speech presents unique challenges: speech signals are highly complex, and traditional methods struggle with them. Moreover, as deepfake technology improves, the problem will only grow harder. This research addresses these challenges with methods that detect and help prevent the spread of fake speech signals, contributing to a safer digital landscape.


Related Publications


  1. Forensic Speech Enhancement: Toward Reliable Handling of Poor-Quality Speech Recordings Used as Evidence in Criminal Trials.
    Helen Fraser, Vincent Aubanel, Robert C. Maher, Candy Olivia Mawalim, Xin Wang, Peter Počta, Emma Keith, Gérard Chollet, and Karla Pizzi.

    Journal of the Audio Engineering Society, 2024 (Accepted).

    This paper proposes an innovative interdisciplinary approach to evaluating the effectiveness of forensic speech enhancement (FSE). FSE faces unique challenges arising from a range of factors, including poor recording quality, highly variable conditions from case to case, and content uncertainty. Despite these difficulties, FSE is commonly admitted in court and can significantly influence the outcome of criminal trials. Current FSE practices are hindered by unrealistic expectations from courts, which often assume that enhanced audio inherently clarifies content. In fact, FSE can have the opposite, undesired effect, potentially resulting in unfair prejudice, for example when it increases the credibility of a misleading transcript. The proposed interdisciplinary project advocates for better consideration of speech perception factors, particularly those related to transcription. It aims to bridge the gap between FSE and forensic transcription by promoting a combined approach to enhancing and accurately transcribing forensic audio. By developing a position statement on FSE capabilities, the project seeks to establish realistic standards and foster collaboration among researchers and practitioners. This effort aims to ensure reliable, accountable forensic audio evidence, aligning with forensic science standards and improving the effectiveness of the justice system.
  2. Detecting Spoof Voices in Asian Non-Native Speech: An Indonesian and Thai Case Study.
    Aulia Adila, Candy Olivia Mawalim, and Masashi Unoki

    The 16th Annual Conference of the Asia-Pacific Signal and Information Processing Association (APSIPA ASC 2024), Galaxy International Convention Center, Macau, China, 3-6 Dec 2024. (Accepted)

    TBD
  3. Analysis of Pathological Features for Spoof Detection.
    Myat Aye Aye Aung, Hay Mar Soe Naing, Aye Mya Hlaing, Win Pa Pa, Kasorn Galajit, and Candy Olivia Mawalim

    The 27th International Conference of the Oriental COCOSDA (O-COCOSDA 2024), National Yang Ming Chiao Tung University, Hsinchu, Taiwan, 17-19 Oct 2024. (Accepted)

    TBD
  4. UCSYSpoof: A Myanmar Language Dataset for Voice Spoofing Detection.
    Hay Mar Soe Naing, Win Pa Pa, Aye Mya Hlaing, Myat Aye Aye Aung, Kasorn Galajit, and Candy Olivia Mawalim

    The 27th International Conference of the Oriental COCOSDA (O-COCOSDA 2024), National Yang Ming Chiao Tung University, Hsinchu, Taiwan, 17-19 Oct 2024. (Accepted)

    TBD
  5. Indonesian Speech Anti-Spoofing System: Data Creation and CNN Models.
    Sarah Azka Arief, Candy Olivia Mawalim, and Dessi Puji Lestari

    The 11th International Conference on Advanced Informatics: Concepts, Theory, and Applications (ICAICTA 2024), Singapore, 28-30 Sept 2024.

    Biometric systems are prone to spoofing attacks. While research in speech anti-spoofing has been progressing, diverse language datasets remain scarce. This study aims to bridge this gap by developing an Indonesian spoofed speech dataset covering replay attacks, text-to-speech, and voice conversion. This dataset forms the foundation for an Indonesian speech anti-spoofing system. Light convolutional neural network (LCNN) and residual network (ResNet) models, both based on convolutional neural networks (CNN), were then developed to evaluate the dataset, using linear frequency cepstral coefficients (LFCC) as input features. Both models achieve remarkably low minDCF and EER scores approaching zero. The results also hold up under 4-fold cross-validation, showing strong initial performance with no signs of overfitting. However, models trained solely on Common Voice or Prosa.ai data performed poorly in cross-source tests, suggesting generalization issues due to a lack of diversity in the dataset. This highlights the need for further improvement and continued research in Indonesian speech spoof detection.
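
    Several entries in this list pair LFCC features with CNN classifiers. As a concrete reference, here is a minimal LFCC extraction sketch in NumPy/SciPy; the filterbank size, frame settings, and coefficient count are illustrative assumptions, not the configuration used in the paper:

    ```python
    import numpy as np
    from scipy.fftpack import dct
    from scipy.signal import stft

    def lfcc(x, sr=16000, n_filters=40, n_coeffs=20, n_fft=512, hop=160):
        """Linear frequency cepstral coefficients (illustrative sketch).
        Unlike MFCC, the triangular filterbank is spaced linearly in Hz."""
        _, _, Z = stft(x, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
        power = np.abs(Z) ** 2                        # (n_fft//2 + 1, frames)

        # Triangular filterbank with linearly spaced center frequencies
        edges = np.linspace(0, sr / 2, n_filters + 2)
        bins = np.floor(edges / (sr / 2) * (n_fft // 2)).astype(int)
        fbank = np.zeros((n_filters, n_fft // 2 + 1))
        for m in range(1, n_filters + 1):
            lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
            if c > lo:
                fbank[m - 1, lo:c] = np.linspace(0.0, 1.0, c - lo, endpoint=False)
            if hi > c:
                fbank[m - 1, c:hi] = np.linspace(1.0, 0.0, hi - c, endpoint=False)

        # Log filterbank energies, then DCT; keep the first n_coeffs
        energies = np.log(fbank @ power + 1e-10)
        return dct(energies, axis=0, norm='ortho')[:n_coeffs]

    feats = lfcc(np.random.randn(16000))              # 1 s of noise as a stand-in
    print(feats.shape)                                # (20, n_frames)
    ```
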
  6. Study on Inaudible Speech Watermarking Method Based on Spread-Spectrum Using Linear Prediction Residue.
    Aulia Adila, Candy Olivia Mawalim, Takuto Isoyama, and Masashi Unoki.

    Journal of Signal Processing, Research Institute of Signal Processing, 2024, Volume 28 Issue 6 Pages 309-313.

    A reliable speech watermarking technique must balance four requirements: inaudibility, robustness, blind detectability, and confidentiality. A previous study proposed a speech watermarking technique based on direct spread spectrum (DSS) using a linear prediction (LP) scheme, i.e., LP-DSS, that could simultaneously satisfy all four. However, an inaudibility issue was found due to the incorporation of a blind detection scheme with frame synchronization. In this paper, we investigate the feasibility of using a psychoacoustical model that simulates auditory masking to control the embedding level of the watermark signal and thereby resolve the inaudibility issue in the LP-DSS scheme. Evaluation results confirmed that controlling the embedding level with the psychoacoustical model, with a constant scaling-factor setting, could balance the trade-off between inaudibility and detection ability with a payload of up to 64 bps.
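
    To make the DSS-on-LP-residue idea concrete, here is a minimal single-frame sketch; the LP order, frame length, PN sequence, and the constant scaling factor alpha are illustrative assumptions, and the paper's psychoacoustic control of the embedding level is reduced here to that constant:

    ```python
    import numpy as np
    import librosa
    from scipy.signal import lfilter

    def embed_frame(frame, bit, pn, alpha=0.05, order=12):
        """Embed one watermark bit into the LP residue of a speech frame."""
        a = librosa.lpc(frame, order=order)       # LP coefficients [1, a1..ap]
        residue = lfilter(a, [1.0], frame)        # analysis: e[n] = A(z) x[n]
        marked = residue + alpha * (1 if bit else -1) * pn
        return lfilter([1.0], a, marked)          # synthesis: 1 / A(z)

    def detect_frame(frame, pn, order=12):
        """Blindly detect the bit by correlating the residue with the PN code."""
        a = librosa.lpc(frame, order=order)
        residue = lfilter(a, [1.0], frame)
        return int(np.dot(residue, pn) > 0)

    rng = np.random.default_rng(0)
    pn = rng.choice([-1.0, 1.0], size=400)        # pseudo-noise spreading code
    frame = rng.standard_normal(400) * 0.1        # noise stands in for a speech frame
    marked = embed_frame(frame, 1, pn)
    print(detect_frame(marked, pn))               # -> 1 in the clean case
    ```
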
  7. ThaiSpoof: A Database for Spoof Detection in Thai Language.
    Kasorn Galajit, Thunpisit Kosolsriwiwat, Candy Olivia Mawalim, Pakinee Aimmanee, Waree Kongprawechnon, Win Pa Pa, Anuwat Chaiwongyen, Teeradaj Racharak, Hayati Yassin, Jessada Karnjana, Surasak Boonkla, and Masashi Unoki.

    The 18th International Joint Symposium on Artificial Intelligence and Natural Language Processing and The International Conference on Artificial Intelligence and Internet of Things (iSAI-NLP 2023).

    Automatic speaker verification (ASV) is widely applied in applications and security systems. However, these systems are vulnerable to various direct and indirect access attacks, which weakens their authentication capability. Research in spoofed speech detection helps to harden such systems, but it remains limited to a handful of languages because suitable datasets are scarce. This paper introduces a Thai-language dataset for spoof detection, consisting of genuine speech signals and various types of spoofed speech generated with Thai text-to-speech tools, synthesis tools, and speech modification tools. To showcase the dataset, we implement a simple model based on a convolutional neural network (CNN) taking linear frequency cepstral coefficients (LFCC) as input, and train, validate, and test it on our dataset, referred to as ThaiSpoof. The model reaches an accuracy of 95% and an equal error rate (EER) of 4.67%, indicating that ThaiSpoof has the potential to support further spoof detection studies.
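
    A simple CNN over LFCC input, as the abstract describes, could be sketched as below; the layer sizes and depth are assumptions for illustration, not the paper's exact architecture:

    ```python
    import torch
    import torch.nn as nn

    class SpoofCNN(nn.Module):
        """Binary genuine/spoof classifier over LFCC 'images' (coeffs x frames)."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
            )

        def forward(self, x):             # x: (batch, 1, n_lfcc, n_frames)
            return self.classifier(self.features(x))

    model = SpoofCNN()
    lfcc_batch = torch.randn(8, 1, 20, 300)   # 8 utterances of dummy LFCCs
    logits = model(lfcc_batch)                # (8, 2): genuine vs. spoof scores
    ```
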
  8. Voice Contribution on LFCC feature and ResNet-34 for Spoof Detection.
    Khaing Zar Mon, Kasorn Galajit, Candy Olivia Mawalim, Jessada Karnjana, Tsuyoshi Isshiki, and Pakinee Aimmanee.

    The 18th International Joint Symposium on Artificial Intelligence and Natural Language Processing and The International Conference on Artificial Intelligence and Internet of Things (iSAI-NLP 2023).

    Biometric authentication has recently seen significant advancement, particularly in speaker verification. Despite this progress, compelling evidence highlights the technology's continued vulnerability to malicious spoofing attacks, which calls for specialized countermeasures against various attack types. This paper focuses on detecting replay, speech synthesis, and voice conversion attacks. We use linear frequency cepstral coefficients (LFCC) as the front-end feature extraction method and ResNet-34 to classify genuine and fake speech. We evaluated the combined LFCC and ResNet-34 method on the ASVspoof 2019 dataset, covering both physical access (PA) and logical access (LA) conditions. In our approach, we compare using the entire utterance for feature extraction, in both the PA and LA datasets, with an alternative method that extracts features from a specific percentage of the voice segment within the utterance. In addition, we conducted a comprehensive evaluation against the established baseline techniques, LFCC-GMM and CQCC-GMM. The proposed method demonstrates promising performance, achieving equal error rates (EER) of 3.11% and 3.49% on the development and evaluation datasets, respectively, for PA attacks, and of 0.16% and 6.89%, respectively, for LA attacks. These results show that the method identifies spoof attacks well in both PA and LA conditions.
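
    Since the entries in this list consistently report EER, a small utility for estimating it from detection scores may help; this is a standard threshold-scan approximation, not the evaluation code used in these papers:

    ```python
    import numpy as np

    def compute_eer(genuine_scores, spoof_scores):
        """Equal error rate: the operating point where the false-accept and
        false-reject rates cross, found by scanning candidate thresholds."""
        thresholds = np.sort(np.concatenate([genuine_scores, spoof_scores]))
        far = np.array([(spoof_scores >= t).mean() for t in thresholds])
        frr = np.array([(genuine_scores < t).mean() for t in thresholds])
        idx = np.argmin(np.abs(far - frr))
        return (far[idx] + frr[idx]) / 2

    rng = np.random.default_rng(1)
    genuine = rng.normal(2.0, 1.0, 1000)   # higher scores for genuine speech
    spoof = rng.normal(0.0, 1.0, 1000)
    print(f"EER ~ {100 * compute_eer(genuine, spoof):.2f}%")  # around 16%
    ```
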
  9. Analysis of Spectro-Temporal Modulation Representation for Deep-Fake Speech Detection.
    Haowei Cheng, Candy Olivia Mawalim, Kai Li, Lijun Wang, and Masashi Unoki.

    The 15th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2023), Taipei, Taiwan, 31 October - 3 November 2023.

    Deep-fake speech detection aims to develop effective techniques for identifying fake speech generated using advanced deep-learning methods, thereby reducing the negative impact of maliciously produced or disseminated fake speech in real-life scenarios. Although humans can distinguish genuine from fake speech relatively easily thanks to the human auditory system, machines find it difficult to do so reliably. One major reason is that machines struggle to separate speech content from information about the human vocal system, and common features used in speech processing handle this poorly, hindering neural networks from learning the discriminative differences between genuine and fake speech. To address this issue, we investigated spectro-temporal modulation representations of genuine and fake speech, which simulate the human auditory perception process. The spectro-temporal modulation was then fed to a light convolutional neural network with bidirectional long short-term memory for classification. We conducted experiments on the benchmark datasets of the Automatic Speaker Verification and Spoofing Countermeasures Challenge 2019 (ASVspoof2019) and the Audio Deep synthesis Detection Challenge 2023 (ADD2023), achieving equal error rates of 8.33% and 42.10%, respectively. The results show that spectro-temporal modulation representations can distinguish genuine from deep-fake speech and perform adequately on both datasets.
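
    One common way to approximate a spectro-temporal modulation representation is a 2-D Fourier transform over patches of a log spectrogram; the sketch below uses this simplification with assumed patch and filterbank sizes, whereas the paper's auditory-model representation is richer:

    ```python
    import numpy as np
    import librosa

    def modulation_representation(y, sr=16000, n_mels=64, patch_frames=32):
        """Rough spectro-temporal modulation features: 2-D FFT magnitudes over
        sliding patches of the log-mel spectrogram (illustrative only)."""
        S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels, hop_length=160)
        logS = np.log(S + 1e-10)
        patches = []
        for start in range(0, logS.shape[1] - patch_frames + 1, patch_frames):
            patch = logS[:, start:start + patch_frames]
            # The 2-D FFT magnitude captures joint spectral (cycles/channel)
            # and temporal (Hz) modulation energy within the patch.
            patches.append(np.abs(np.fft.fft2(patch)))
        return np.stack(patches) if patches else np.empty((0, n_mels, patch_frames))

    y = np.random.randn(16000)           # stand-in for 1 s of speech
    feats = modulation_representation(y)
    print(feats.shape)                   # (n_patches, 64, 32)
    ```
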
  10. F0 Modification via PV-TSM Algorithm for Speaker Anonymization Across Gender.
    Candy Olivia Mawalim, Shogo Okada, and Masashi Unoki.

    2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Chiang Mai, Thailand, 7--10 November 2022.

    Speaker anonymization has been developed to protect personally identifiable information while retaining the other information encapsulated in speech. Datasets, metrics, and protocols for evaluating speaker anonymization have been defined in the Voice Privacy Challenge (VPC). However, existing privacy metrics focus on evaluating the anonymization of general speaker individuality, as represented by an x-vector. This study investigates the effect of anonymization on the perception of gender; understanding how anonymization causes gender transformation is essential for various applications of speaker anonymization. We propose cross-gender speaker anonymization methods based on phase-vocoder time-scale modification (PV-TSM). In addition to the VPC evaluation, we developed a gender classifier to evaluate the anonymization of a speaker's gender. The objective evaluation results show that our proposed method successfully anonymizes gender. Moreover, our methods outperformed the signal-processing-based baseline methods in anonymizing speaker individuality, as represented by the x-vector in ASVeval, while maintaining speech intelligibility.
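    The PV-TSM route to F0 modification rests on a classic identity: time-stretch the signal with a phase vocoder, then resample it back to the original duration, and every frequency, including F0, scales by the stretch factor. A minimal sketch (the shift factor 1.7 is an arbitrary illustration, not a setting from the paper):

    ```python
    import numpy as np
    import librosa
    from scipy.signal import resample

    def pvtsm_pitch_shift(y, factor):
        """Shift pitch by `factor` via phase-vocoder TSM plus resampling:
        stretch duration by `factor`, then resample back to the original
        length, which multiplies every frequency (including F0) by `factor`."""
        stretched = librosa.effects.time_stretch(y, rate=1.0 / factor)
        return resample(stretched, len(y))

    sr = 16000
    t = np.arange(sr) / sr
    y = np.sin(2 * np.pi * 120 * t)       # a 120 Hz tone stands in for male F0
    shifted = pvtsm_pitch_shift(y, 1.7)   # -> roughly a 204 Hz tone
    ```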

  11. Speaker Anonymization by Pitch Shifting Based on Time-Scale Modification.
    Candy Olivia Mawalim, Shogo Okada, and Masashi Unoki.

    2nd Symposium on Security and Privacy in Speech Communication joined with 2nd VoicePrivacy Challenge Workshop September 23 & 24 2022, as a satellite to Interspeech 2022, Incheon, Korea.

    The increasing use of speech in digital technology raises a privacy issue because speech contains biometric information. Several ways of dealing with this issue have been proposed, including speaker anonymization or de-identification. Speaker anonymization aims to suppress personally identifiable information (PII) while keeping other speech properties, including linguistic information. In this study, we use time-scale modification (TSM) signal processing for speaker anonymization. Signal-processing approaches are significantly less complex than the state-of-the-art x-vector-based speaker anonymization method because they do not require a training process. We propose anonymization methods using the two major categories of TSM: a synchronous overlap-add (SOLA)-based algorithm and phase-vocoder-based TSM (PV-TSM). We evaluate our methods with the standard objective evaluation introduced in the VoicePrivacy challenge. The results show that our PV-TSM-based method balances privacy and utility metrics better than the baseline systems, especially when evaluated with an automatic speaker verification (ASV) system on anonymized enrollment and anonymized trials (a-a). Furthermore, it outperformed the x-vector-based method, which suffers from a complex training process, low privacy in the a-a scenario, and low voice distinctiveness.
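    For contrast with PV-TSM, here is a bare-bones waveform-domain stretcher in the SOLA family: fixed analysis hop, scaled synthesis hop, and cross-correlation alignment of each frame against the output built so far. Window, hop, and search sizes are arbitrary illustrative choices:

    ```python
    import numpy as np

    def sola_stretch(y, rate, win=1024, hop=256, search=128):
        """Time-stretch y by 1/rate with synchronized overlap-add: each
        analysis frame is placed at a scaled synthesis position, nudged to
        the lag of maximum cross-correlation with the existing output."""
        syn_hop = int(hop / rate)
        out = np.zeros(int(len(y) / rate) + win + search)
        norm = np.zeros_like(out)
        window = np.hanning(win)
        pos_in, pos_out = 0, 0
        while pos_in + win <= len(y):
            frame = y[pos_in:pos_in + win]
            best, best_corr = 0, -np.inf
            for lag in range(-search, search + 1):
                p = pos_out + lag
                if p < 0 or p + win > len(out):
                    continue
                corr = np.dot(out[p:p + win], frame)
                if corr > best_corr:
                    best, best_corr = lag, corr
            p = pos_out + best
            out[p:p + win] += window * frame
            norm[p:p + win] += window
            pos_in += hop
            pos_out += syn_hop
        return out / np.maximum(norm, 1e-8)

    y = np.sin(2 * np.pi * 200 * np.arange(16000) / 16000)
    slower = sola_stretch(y, rate=0.8)    # ~25% longer, pitch preserved
    ```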

  12. Speaker Anonymization by Modifying Fundamental Frequency and X-Vectors Singular Value.
    Candy Olivia Mawalim, Kasorn Galajit, Jessada Karnjana, Shunsuke Kidani, and Masashi Unoki.

    Computer Speech & Language, Elsevier, vol. 73, 101326, 2022.

    Speaker anonymization is a method of protecting voice privacy by concealing individual speaker characteristics while preserving linguistic information. The VoicePrivacy Challenge 2020 was initiated to generalize the task of speaker anonymization and introduced two frameworks for it. In this study, we propose a method of improving the primary framework by modifying the state-of-the-art speaker individuality feature (namely, the x-vector) in a neural waveform speech synthesis model. Our method is based on x-vector singular value modification with a clustering model. We also propose a technique of modifying the fundamental frequency and speech duration to enhance anonymization performance. To evaluate our method, we carried out objective and subjective tests. The overall objective results show that our method improves anonymization performance in terms of speaker verifiability, while the subjective results show improvement in terms of speaker dissimilarity. The intelligibility and naturalness of the anonymized speech with prosody modification were only slightly reduced (by less than 5% in word error rate) compared with the baseline system.
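    The core idea of modifying x-vector components in a singular-value basis can be sketched briefly; the random pool, the number of modified components, and the scaling below are illustrative assumptions, and the paper's clustering-based pseudo-targets and ensemble regression are omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    pool = rng.standard_normal((200, 512))    # stand-in pool of 200 x-vectors

    # Decompose the pool; the leading right singular vectors (rows of Vt)
    # capture the directions of greatest speaker variability.
    U, s, Vt = np.linalg.svd(pool, full_matrices=False)

    def anonymize_xvector(x, Vt, k=30, scale=-0.8):
        """Project an x-vector onto the pool's singular basis and rescale the
        k most significant components, pushing it toward a pseudo-speaker."""
        coeffs = Vt @ x                  # coordinates in the singular basis
        coeffs[:k] *= scale              # modify the dominant components
        return Vt.T @ coeffs             # back to the original x-vector space

    x = pool[0]
    x_anon = anonymize_xvector(x, Vt)
    cos = np.dot(x, x_anon) / (np.linalg.norm(x) * np.linalg.norm(x_anon))
    print(f"cosine similarity to original: {cos:.2f}")   # reduced relative to 1.0
    ```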

  13. Speech Watermarking by McAdams Coefficient Scheme Based on Random Forest Learning.
    Candy Olivia Mawalim, and Masashi Unoki.

    Entropy, MDPI, vol. 23, no. 10, 2021.

    Speech watermarking has become a promising solution for protecting the security of speech communication systems. We propose a speech watermarking method that uses the McAdams coefficient, which is commonly used for frequency harmonics adjustment. The embedding process is conducted using bit-inverse shifting, and a random forest classifier using features related to frequency harmonics provides blind detection. An objective evaluation analyzed the performance of our method in terms of the inaudibility and robustness requirements. The results indicate that our method satisfies the speech watermarking requirements with a 16 bps payload under normal conditions and numerous non-malicious signal processing operations, e.g., conversion to Ogg or MP4 format.
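    The McAdams transformation at the heart of this method is compact: raise the angles of the LP poles to the power of the coefficient. A per-frame sketch (the LP order and coefficient value are illustrative; the paper's bit-inverse embedding and random-forest detection are omitted):

    ```python
    import numpy as np
    import librosa
    from scipy.signal import lfilter

    def mcadams_frame(frame, alpha=0.8, order=16):
        """Warp a frame's formants by raising LP pole angles to the power
        alpha (the McAdams coefficient); alpha=1.0 leaves it unchanged."""
        a = librosa.lpc(frame, order=order)
        poles = np.roots(a)
        new_poles = poles.copy()
        mask = np.abs(poles.imag) > 1e-8          # complex (formant) poles only
        angles = np.angle(poles[mask])
        # Warp the pole angles, keep the radii (preserves stability)
        new_angles = np.sign(angles) * (np.abs(angles) ** alpha)
        new_poles[mask] = np.abs(poles[mask]) * np.exp(1j * new_angles)
        new_a = np.real(np.poly(new_poles))
        residue = lfilter(a, [1.0], frame)        # excitation via analysis filter
        return lfilter([1.0], new_a, residue)     # resynthesize with warped poles

    rng = np.random.default_rng(3)
    frame = rng.standard_normal(400) * 0.1        # stand-in for a speech frame
    warped = mcadams_frame(frame)
    ```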

  14. Improving Security in McAdams Coefficient-Based Speaker Anonymization by Watermarking Method.
    Candy Olivia Mawalim, and Masashi Unoki.

    2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Tokyo, Japan, December 2021.

    Speaker anonymization aims to suppress speaker individuality to protect privacy in speech while preserving other aspects, such as speech content. One effective solution for anonymization is to modify the McAdams coefficient. In this work, we propose a method to improve the security of McAdams-coefficient-based speaker anonymization by using a speech watermarking approach. The proposed method consists of two main processes: embedding and detection. In the embedding process, two different McAdams coefficients represent the binary bits "0" and "1", and the watermarked speech is obtained by frame-by-frame bit-inverse switching. The detection process is then carried out by a power spectrum comparison. We conducted objective evaluations of the anonymization with reference to the VoicePrivacy 2020 Challenge (VP2020) and of the speech watermarking with reference to the Information Hiding Challenge (IHC), and found that our method satisfies the blind detection, inaudibility, and robustness requirements of watermarking. It also significantly improved anonymization performance in comparison to the secondary baseline system of VP2020.

  15. X-Vector Singular Value Modification and Statistical-Based Decomposition with Ensemble Regression Modeling for Speaker Anonymization System.
    Candy Olivia Mawalim, Kasorn Galajit, Jessada Karnjana, and Masashi Unoki.

    Interspeech 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, pp. 1703–1707, October 2020.

    Anonymizing speaker individuality is crucial for voice privacy protection. In this paper, we propose a speaker individuality anonymization system that uses singular value modification and statistical-based decomposition on an x-vector with ensemble regression modeling. An anonymization system requires speaker-to-speaker correspondence (each speaker corresponds to a pseudo-speaker), which can be achieved by modifying significant x-vector elements. The significant elements were determined by singular value decomposition and variant analysis. The anonymization process was then performed by an ensemble regression model trained on x-vector pools with clustering-based pseudo-targets. The results demonstrate that our proposed system effectively improves objective verifiability, especially in the anonymized-enrollment and anonymized-trial setting, while preserving intelligibility scores similar to those of the baseline system introduced in the VoicePrivacy 2020 Challenge.