AI Model Listens to Typing, Potentially Compromising Sensitive Data

Revolutionary AI Model Predicts Keystrokes Through Sound: A New Wave of Acoustic Attacks.

  1. Researchers unveil an AI model that decodes keyboard input from audio signals alone.
  2. New attack predicts keystrokes by analyzing acoustic signatures of different keys.
  3. Implications for data security and emergence of acoustic cyberattacks raise concerns.
  4. The AI model achieves 93% accuracy on Zoom recordings, which the authors describe as a record for that medium.
  5. Countermeasures such as added noise reduce accuracy, and keyboard-specific training limits widespread malicious use.

Researchers have unveiled a deep-learning model capable of deciphering keyboard input solely from audio signals. The work, led by Joshua Harrison (Durham University), Ehsan Toreini (University of Surrey), and Maryam Mehrnezhad (Royal Holloway, University of London) and published on Cornell University’s arXiv preprint server, has significant implications for data security and the emergence of acoustic cyberattacks.

In a recently published paper titled “A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards,” the team introduced an artificial intelligence (AI) model that accurately predicts keystrokes by analyzing the unique acoustic signatures produced by different keys on a keyboard.

The technique involves training the model to associate the audio pattern of each key press with the corresponding character, allowing it to effectively ‘listen’ to typing and transcribe it with striking accuracy. A simplified sketch of this classification idea appears below.
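According to the paper, each isolated key press is converted into a mel-spectrogram and fed to a deep image classifier (CoAtNet). The sketch below is a heavily simplified illustration of that pipeline: it computes log-mel features with librosa and substitutes a nearest-neighbour classifier for the deep model, with synthetic clicks standing in for real recordings. The function names, the toy three-key “keyboard”, and all parameter values are illustrative assumptions, not the authors’ code.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

SR = 44100  # sample rate (Hz)

def keystroke_features(clip, n_mels=64):
    """Flattened log-mel spectrogram of one isolated keystroke clip."""
    mel = librosa.feature.melspectrogram(y=clip, sr=SR, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max).flatten()

def fake_keystroke(freq, rng):
    """Synthetic stand-in for a recorded key press: a decaying tone whose
    dominant frequency differs per key. Real data would be 25 recorded
    presses per key, as in the paper's setup."""
    t = np.linspace(0, 0.1, int(SR * 0.1), endpoint=False)
    tone = np.sin(2 * np.pi * freq * t) * np.exp(-40 * t)
    return tone + 0.01 * rng.standard_normal(t.size)

rng = np.random.default_rng(0)
keys = {"a": 800.0, "b": 1200.0, "c": 1600.0}  # toy three-key "keyboard"

# 25 examples per key, mirroring the paper's data-collection protocol.
X = [keystroke_features(fake_keystroke(f, rng)) for f in keys.values() for _ in range(25)]
y = [k for k in keys for _ in range(25)]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([keystroke_features(fake_keystroke(1200.0, rng))]))  # -> ['b']
```

In the real attack, a deep network trained on spectrogram images takes the place of the nearest-neighbour classifier, which is what pushes accuracy into the range the paper reports.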

Traditional cyberattacks often exploit software vulnerabilities or rely on phishing tactics, but this new wave of attacks leverages the physical characteristics of keyboards, emphasizing the significance of sound as a potential security vulnerability. The implications are far-reaching, as this method could compromise user passwords, conversations, messages, and other sensitive information.

The team reports that the model achieves an accuracy of 93% when trained and tested on Zoom recordings, which the authors describe as a record for that medium. Training involves exposing the model to multiple instances of each keystroke on a specific keyboard: the researchers used a MacBook Pro, pressing 36 of its keys 25 times each to build the training dataset. A rough sketch of the keystroke-isolation step that precedes classification appears below.
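Before classification, the recording must be split into individual key presses; the paper describes an energy-based thresholding step for this. A minimal, hypothetical sketch of such segmentation follows, where the window size, threshold ratio, and clip length are illustrative guesses rather than the paper’s values.

```python
import numpy as np

def isolate_keystrokes(audio, sr, win_ms=10, threshold_ratio=0.25, clip_ms=100):
    """Energy-based segmentation: flag short windows whose energy exceeds a
    fraction of the peak, then cut a fixed-length clip at each onset."""
    win = int(sr * win_ms / 1000)
    clip_len = int(sr * clip_ms / 1000)
    energy = np.array([np.sum(audio[i:i + win] ** 2)
                       for i in range(0, len(audio) - win, win)])
    threshold = threshold_ratio * energy.max()
    clips, last_end = [], -1
    for idx, e in enumerate(energy):
        start = idx * win
        if e > threshold and start > last_end:  # skip windows inside a clip
            clips.append(audio[start:start + clip_len])
            last_end = start + clip_len
    return clips

# Demo: two synthetic "clicks" half a second apart come back as two clips.
sr = 44100
audio = np.zeros(sr)
for onset in (int(0.2 * sr), int(0.7 * sr)):
    audio[onset:onset + 200] += np.hanning(200)
print(len(isolate_keystrokes(audio, sr)))  # -> 2
```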

Screenshot from the research paper (arXiv)

Despite its remarkable potential, the AI model comes with limitations. Changing typing styles or using touch typing significantly reduces the model’s accuracy, dropping it to between 40% and 64%. Countermeasures such as introducing noise into the audio signal can also obfuscate keystrokes and degrade accuracy; a minimal sketch of that idea follows.
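One plausible implementation of the noise countermeasure is to mix broadband noise into the microphone feed at a low signal-to-noise ratio, masking the sharp transients each key press produces. The sketch below assumes a simple white-noise masker; the function and its parameters are illustrative, not taken from the paper.

```python
import numpy as np

def add_masking_noise(audio, snr_db=0.0, rng=None):
    """Mix white noise into a recording at a target signal-to-noise ratio.
    Lower snr_db buries more of the keystroke transients a classifier
    relies on; the exact accuracy impact depends on model and keyboard."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power) * rng.standard_normal(audio.shape)
    return audio + noise
```

Note that for this defence to matter, the noise must reach the eavesdropper’s capture path, for example by being mixed into the outgoing call audio rather than played back locally.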

The researchers also note that the model’s effectiveness depends on the specific keyboard’s sound profile. This dependence restricts the attack to keyboards with similar acoustic characteristics, limiting its scope for widespread malicious use.

As the digital landscape evolves, the arms race between cyberattacks and defence measures continues to escalate. The development of AI-based acoustic side-channel attacks underscores the need for stronger security measures, including noise-suppression tools such as NVIDIA’s RTX-powered Broadcast app, which may help counteract these types of attacks.

For a comprehensive exploration of the Cornell team’s findings and methodologies, readers can refer to the official research paper (PDF) available for further study. As the boundaries of AI and cybersecurity continue to blur, understanding these advancements is crucial for individuals and organizations to stay ahead of potential threats.
