Fake digital fingerprints created by artificial intelligence can fool fingerprint scanners on smartphones, according to new research, raising the risk of hackers using the vulnerability to steal from victims’ online bank accounts.
A recent paper by New York University and Michigan State University researchers detailed how deep learning technologies could be used to weaken biometric security systems. The research, supported by a United States National Science Foundation grant, won a best paper award at a conference on biometrics and cybersecurity in October.
Smartphone makers like Apple and Samsung typically use biometric technology in their phones so that people can use fingerprints to easily unlock their devices instead of entering a passcode. Hoping to add some of that convenience, major banks like Wells Fargo are increasingly letting customers access their checking accounts using their fingerprints.
But while fingerprint scanners may be convenient, researchers have found that the software that runs these systems can be fooled. The discovery is important because it underscores how criminals can potentially use cutting-edge AI technologies to do an end run around conventional cybersecurity.
The latest paper about the problem builds on previous research published last year by some of the same NYU and Michigan State researchers. The authors of that paper discovered that they could fool some fingerprint security systems by using either digitally modified or partial images of real fingerprints. These so-called MasterPrints could trick biometric security systems that only rely on verifying certain portions of a fingerprint image rather than the entire print.
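The partial-matching weakness can be sketched with a toy model. Everything below is invented for illustration — bit vectors stand in for fingerprint templates, and the matcher bears no relation to real biometric software — but it shows the core point: a system that verifies only a small patch of a print, rather than the whole print, is dramatically more likely to accept a random "fingerprint" against at least one enrolled template.

```python
import random

random.seed(0)

N_BITS = 64        # length of a toy "fingerprint" bit vector
PARTIAL_BITS = 12  # bits a partial-print matcher actually checks
TRIALS = 5000

def random_print():
    return [random.randint(0, 1) for _ in range(N_BITS)]

templates = [random_print() for _ in range(100)]   # 100 enrolled users
partial_templates = [t[:PARTIAL_BITS] for t in templates]

full_hits = partial_hits = 0
for _ in range(TRIALS):
    attack = random_print()
    # Full match: the entire print must agree with some template.
    if any(attack == t for t in templates):
        full_hits += 1
    # Partial match: only the first PARTIAL_BITS positions are checked,
    # standing in for a sensor that sees a small patch of the finger.
    if any(attack[:PARTIAL_BITS] == pt for pt in partial_templates):
        partial_hits += 1

print(f"full-print false accepts:    {full_hits}/{TRIALS}")
print(f"partial-print false accepts: {partial_hits}/{TRIALS}")
```

With these toy numbers, full-print matching essentially never accepts a random attack, while partial matching accepts a meaningful fraction — the same asymmetry the MasterPrints work exploited.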
One irony is that humans inspecting MasterPrints could likely tell immediately that they were fake, because they contained only partial fingerprints. Software, it turns out, could not.
In the new paper, the researchers used neural networks, the software at the heart of modern machine learning, to create convincing-looking digital fingerprints that performed even better than the images used in the earlier study. Not only did the fake fingerprints look real, they contained hidden properties, undetectable by the human eye, that could confuse some fingerprint scanners.
Julian Togelius, one of the paper’s authors and an NYU associate computer science professor, said the team created the fake fingerprints, dubbed DeepMasterPrints, using a variant of neural network technology called “generative adversarial networks (GANs),” which he said “have taken the AI world by storm for the last two years.”
Researchers have used GANs to create convincing-looking but fabricated photos and videos known as “deep fakes,” which some lawmakers worry could be used to create fake videos and propaganda that the general public would mistake for the real thing. For example, several researchers have described how they could use AI techniques to create fabricated videos of former President Barack Obama giving speeches that never took place, among other things.
AI-altered photos are also fooling computers, as MIT researchers showed last year when they created an image of a turtle that confused Google’s image-recognition software. The software mistook the turtle for a rifle because the image contained hidden elements that shared certain properties with an image of a gun, all of them invisible to the human eye.
With GANs, researchers typically use a combination of two neural networks that work together to create realistic images embedded with mysterious properties that can fool image-recognition software. Using thousands of publicly available fingerprint images, the researchers trained one neural network to recognize real fingerprint images, and trained the other to create its own fake fingerprints.
They then fed the second neural network’s fake fingerprint images into the first neural network to test how effective they were, explained Philip Bontrager, an NYU PhD candidate in computer science who also worked on the paper. Over time, the second neural network learned to generate realistic-looking fingerprint images that could trick the other neural network.
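The adversarial loop described above can be sketched in miniature. The toy below is not the paper's architecture: single numbers stand in for fingerprint images, and one-parameter linear models stand in for the two networks. It only illustrates the generator-versus-discriminator dynamic — the discriminator learns to score real samples higher, and the generator learns to produce samples the discriminator scores as real.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-in for "real fingerprints": scalar samples around 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

w_g, b_g = 1.0, 0.0  # generator: g(z) = w_g * z + b_g
w_d, b_d = 0.0, 0.0  # discriminator: d(x) = sigmoid(w_d * x + b_d)

lr = 0.01
for step in range(3000):
    # Discriminator update: push d(real) up and d(fake) down.
    xr = real_batch(32)
    z = rng.normal(0.0, 1.0, 32)
    xf = w_g * z + b_g
    sr, sf = sigmoid(w_d * xr + b_d), sigmoid(w_d * xf + b_d)
    w_d += lr * np.mean((1 - sr) * xr - sf * xf)
    b_d += lr * np.mean((1 - sr) - sf)

    # Generator update: push d(fake) up, i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, 32)
    xf = w_g * z + b_g
    sf = sigmoid(w_d * xf + b_d)
    w_g += lr * np.mean((1 - sf) * w_d * z)
    b_g += lr * np.mean((1 - sf) * w_d)

fake_mean = float(np.mean(w_g * rng.normal(0.0, 1.0, 1000) + b_g))
print(f"generator output mean after training: {fake_mean:.2f} (real data mean: 4.0)")
```

Over training, the generator's output drifts toward the real-data distribution — the same pressure that, at scale, produces realistic-looking fingerprint images.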
The researchers then fed the fake fingerprint images into fingerprint-scanning software sold by tech companies like Innovatrics and Neurotechnology to see if they could be fooled. Each time a fake fingerprint image tricked one of the commercial systems, the researchers were able to improve their technology to produce more convincing fakes.
The neural network responsible for creating the bogus images embeds random data, which Bontrager referred to as “noisy data,” that can fool fingerprint image-recognition software. Although the researchers were able to calibrate this “noisy data” to trip up the fingerprint software using what’s known as an evolutionary algorithm, it’s unclear how the data alters the image, since humans are unable to see its impact.
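An evolutionary search of this kind can be sketched as follows. In the actual work the score came from real fingerprint-matching software; here a hidden target vector is a made-up stand-in black box. The simple (1+1) evolution strategy below mutates a latent "noisy data" vector at random and keeps any mutation the matcher scores higher — no gradients, and no knowledge of the matcher's internals.

```python
import random

random.seed(1)

DIM = 16  # size of the latent vector fed to the generator

# Stand-in for the black-box matcher: a hidden target vector, used
# purely for illustration. The optimizer never inspects it directly.
_hidden = [random.uniform(-1, 1) for _ in range(DIM)]

def matcher_score(latent):
    # Higher is better (here: negative squared distance to the target).
    return -sum((a - b) ** 2 for a, b in zip(latent, _hidden))

def evolve(generations=300, sigma=0.1):
    # (1+1) evolution strategy: mutate the current best vector and
    # keep the mutant only if the black-box score improves.
    best = [0.0] * DIM
    best_score = matcher_score(best)
    for _ in range(generations):
        cand = [x + random.gauss(0, sigma) for x in best]
        s = matcher_score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

start_score = matcher_score([0.0] * DIM)
best, final_score = evolve()
print(f"matcher score: {start_score:.3f} -> {final_score:.3f}")
```

The design point is that an attacker only needs accept/reject or score feedback from the matcher, not access to its internals, to steer the generator's noise input.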
To be sure, criminals face a number of obstacles to cracking fingerprint scanners. For one, many fingerprint systems rely on other security checks, like heat sensors that detect whether a real human finger is present, Bontrager explained.
But these newly developed DeepMasterPrints show that AI technology can be used for nefarious purposes, which means that cybersecurity firms, banks, smartphone makers and other companies using biometric technology must constantly improve their systems to keep pace with rapid advances in AI.
Togelius said that prior to the paper, researchers didn’t consider AI-created fake images to be a “serious threat to biometric systems.” Since its publication, he said, unspecified “large companies” have been contacting him to learn more about the possible security threats posed by fake fingerprints.
Dr. Justas Kranauskas, a research and development manager for Neurotechnology, the maker of fingerprint sensor software, told Fortune in an email that the recent research paper about fooling fingerprint readers “touched” on an important point. But he pointed out that his company uses other kinds of security that the researchers did not incorporate into their study that would, as he put it, ensure a “very low false acceptance risk in real applications.”
Kranauskas also said that Neurotechnology recommends that its corporate customers set their fingerprint-scanning software to a higher security level than the levels the researchers used in their paper.
Bontrager, the researcher, noted, however, that the higher the fingerprint security level, the less convenient it is for users, because companies typically want some leeway so that customers don’t have to repeatedly press their fingers on scanners to get accurate reads.
“So obviously, if you choose a high security setting, [spoofing attacks] are less successful,” Bontrager said. “But then it is less convenient,” he added.
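The security-versus-convenience tradeoff Bontrager describes can be made concrete with a small example. The match scores below are made up for illustration, not vendor data; the point is that raising the acceptance threshold cuts the false-accept rate (spoofs getting in) but raises the false-reject rate (legitimate users having to press their finger again).

```python
# Illustrative match scores (0-100): genuine attempts by the enrolled
# user, and impostor attempts. All numbers are invented for this sketch.
genuine = [62, 70, 55, 81, 74, 68, 59, 77, 65, 72]
impostor = [20, 35, 48, 15, 52, 30, 41, 25, 58, 33]

def rates(threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    return far, frr

for threshold in (40, 60):
    far, frr = rates(threshold)
    print(f"threshold {threshold}: false-accept rate {far:.0%}, "
          f"false-reject rate {frr:.0%}")
```

At the lower threshold, several impostor attempts slip through but every genuine attempt succeeds; at the higher threshold, no impostor gets in but some genuine users are turned away — exactly the leeway-versus-security choice vendors and their customers have to make.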