
Hackers who can listen in on your typing may be able to steal your keystrokes and violate your privacy.
Typing on your keyboard may not be as private as you think. A new study from researchers in the UK demonstrates how keystroke sounds can be used to steal sensitive information.
The researchers trained a deep learning algorithm to identify keystrokes based solely on audio recordings. By analyzing subtle differences in keyboard sounds, the algorithm was able to identify which keys were pressed with 95% accuracy. Even when the typing was recorded over a Zoom call, accuracy remained high at 93%.
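For intuition, here is a minimal sketch of how a single keystroke recording might be turned into a spectrogram feature for such a classifier. The file name, sample rate, and windowing parameters are illustrative assumptions, not details taken from the study.

```python
# Illustrative sketch (not the study's actual pipeline): convert a short
# recording of one keystroke into a log-scaled mel spectrogram that a
# deep learning classifier could consume.
import numpy as np
import librosa

def keystroke_spectrogram(path: str, sr: int = 44_100) -> np.ndarray:
    """Load a keystroke clip and return a log-scaled mel spectrogram."""
    audio, _ = librosa.load(path, sr=sr, mono=True)        # raw waveform
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=1024, hop_length=256, n_mels=64  # assumed params
    )
    return librosa.power_to_db(mel, ref=np.max)             # convert to dB

# Hypothetical usage: one short clip per recorded keystroke.
features = keystroke_spectrogram("keystroke_a_01.wav")
print(features.shape)  # (n_mels, time_frames)
```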
This presents a troubling privacy risk. Passwords, messages, and other private text could potentially be decoded by anyone who can record audio of typing. With the proliferation of smartphones and other microphone-enabled devices, quality audio recordings are easier than ever to obtain.
To gather training data, the researchers pressed each of 36 keys on a MacBook Pro keyboard 25 times. The algorithm learned to associate each key's distinctive sound with the corresponding key. When tested on new recordings, it proved adept at deciphering full sentences and passwords.
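A toy version of that setup, with 36 classes and 25 examples per key, might look like the sketch below. The network architecture, synthetic data, and training loop are placeholder assumptions for illustration, not the researchers' actual model.

```python
# Toy sketch of training a keystroke classifier on spectrogram inputs
# (placeholder architecture and synthetic data, not the study's model).
import torch
import torch.nn as nn

NUM_KEYS = 36          # keys recorded in the study
SAMPLES_PER_KEY = 25   # presses per key in the study

class KeystrokeNet(nn.Module):
    def __init__(self, num_classes: int = NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Synthetic stand-ins for (keystroke spectrogram, key label) pairs.
x = torch.randn(NUM_KEYS * SAMPLES_PER_KEY, 1, 64, 64)
y = torch.arange(NUM_KEYS).repeat_interleave(SAMPLES_PER_KEY)

model = KeystrokeNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                      # illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

With real recordings in place of the random tensors, each spectrogram would be labeled with the key that produced it, and the trained model would then be evaluated on keystrokes it had never heard.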
The study highlights the unanticipated threats arising from the steady advance of machine learning. As algorithms grow more sophisticated, new attack vectors emerge. The acoustic side-channel attack demonstrated here takes advantage of an abundant data source – audio recordings – that users likely don’t consider sensitive.
To protect against this kind of attack, the researchers recommend randomizing keyboard inputs, using biometrics over typed passwords, and masking keystroke sounds during voice and video calls.
As machine learning progresses, we may need to treat everyday behaviors, like typing, as potentially public data. Privacy norms and tools will need to evolve to keep pace with these emerging threats.