The problem with biometrics and the key things to consider

Tue, 21st May 2024

People who are not all that knowledgeable about digital authentication often think biometrics are the answer to all our authentication problems. Many assume the Holy Grail of authentication is facial recognition, or maybe even DNA analysis (when it arrives). It is not.

Biometrics (fingerprint, facial, iris, retina, vein, hand geometry, voice, keystroke dynamics, cursor movements and so on) can be a good form of authentication, but you must pick a good implementation, and there are valid concerns no matter which biometric option you choose.

Here are some of the common issues with biometric authentication and key things to consider.

Accuracy
Most biometric vendors overstate their accuracy figures. The only thing that matters is how accurate the biometric solution is as deployed, under real-world conditions.

In the US, the National Institute of Standards and Technology (NIST) has been reviewing the accuracy of different biometric solutions (mostly fingerprint and facial) for years. Any biometric vendor or algorithm creator can submit their algorithm for review. Typically, NIST is looking for an accuracy goal of around 1:100,000, meaning one error per 100,000 tests. So far, none of the submitted candidates has come anywhere close. The best solutions have an error rate of 1.9%, meaning almost two mistakes for every 100 tests.
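To make that gap concrete, here is a rough back-of-the-envelope comparison in Python (the figure of 10,000 authentications per day is an assumed example for illustration, not something NIST publishes):

  # Rough comparison of how two error rates play out at deployment scale.
  daily_authentications = 10_000  # assumed deployment volume, for illustration only

  nist_goal_rate = 1 / 100_000    # NIST's rough target: one error per 100,000 tests
  best_observed_rate = 0.019      # ~1.9% error rate for the best submitted algorithms

  for label, rate in [("NIST goal", nist_goal_rate), ("Best observed", best_observed_rate)]:
      expected_errors = daily_authentications * rate
      print(f"{label}: ~{expected_errors:g} expected errors per day")

  # NIST goal: ~0.1 expected errors per day
  # Best observed: ~190 expected errors per day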

I have been involved in many biometric deployments at scale, and we see far higher rates of errors (false positives or false negatives) than even what NIST sees in its best-case, lab-condition testing. Biometrics in the real world is a hard nut to crack.

With that said, some biometric solutions are far more accurate than their competitors. There are solutions that rank at the top of their class and a bunch that rank at the bottom. If you are buying a biometric solution, try before you buy.

Security/Hacking
Anything can be hacked. Any biometric solution can be hacked. Any biometric vendor telling you otherwise should be avoided. But some biometric solutions are more resilient than others. The tough part is telling the difference. Here is what I look for when assessing whether a particular biometric solution is more secure than its competitors:

  • Are the biometric solution developers trained in secure development lifecycle (SDL) programming? Most are not, but those who are will be more likely to deliver a more secure product.
  • Does the biometric vendor do in-house code reviews and penetration testing?
  • Does the biometric vendor hire external penetration testers and participate in bug bounties?
  • Is the solution resistant to man-in-the-middle attacks?
  • Does the solution store the biometric attributes of its users in their true image form or transform the captured biometric data into something else that will be less useful to hackers if stolen?
  • Is the solution single factor or multifactor authentication (MFA)? MFA is stronger.
  • Does the solution have above average accuracy as compared to its peers?

Biometric theft
One of the most challenging problems is what to do if your biometric attribute is stolen. For example, all ten of my fingerprints were stolen, along with those of 5.6 million other people, in the infamous June 2015 OPM data breach. How can any system that relies on my fingerprints truly know that the person submitting them is me?

Well, for one, it is better if biometric attributes are paired with a knowledge-based secret like a password or a PIN. An attacker with my fingerprints would also have to know my knowledge-based secret.
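As a rough sketch of what that pairing looks like at verification time (the threshold, PIN values and function names below are illustrative assumptions, not any vendor's actual API), both factors have to pass before access is granted:

  import hashlib
  import hmac
  import os

  MATCH_THRESHOLD = 0.90  # assumed biometric match-score threshold; tuned per deployment

  def hash_pin(pin: str, salt: bytes) -> bytes:
      # Store only a salted, slow hash of the PIN, never the PIN itself.
      return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)

  def authenticate(biometric_score: float, pin_attempt: str,
                   stored_salt: bytes, stored_pin_hash: bytes) -> bool:
      # Factor 1: something you are - the score reported by the biometric matcher.
      biometric_ok = biometric_score >= MATCH_THRESHOLD
      # Factor 2: something you know - constant-time comparison of PIN hashes.
      pin_ok = hmac.compare_digest(hash_pin(pin_attempt, stored_salt), stored_pin_hash)
      # Both factors must pass, so stolen fingerprints alone are not enough.
      return biometric_ok and pin_ok

  # Enrolment and two login attempts (illustrative values)
  salt = os.urandom(16)
  stored_hash = hash_pin("4921", salt)
  print(authenticate(0.95, "4921", salt, stored_hash))  # True
  print(authenticate(0.95, "0000", salt, stored_hash))  # False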

I also prefer biometric systems that do not store my biometric attributes in “plaintext” form, meaning I do not like any biometric system that takes my fingerprints (or face, retina, iris, etc.) and stores them as the real, complete image in its database. I want biometric systems that read my biometric attributes and then transform them into something the system can store and use, but that means nothing to a thief if stolen.
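One well-studied way to do this is a so-called cancelable biometric: the raw feature vector is passed through a user-specific, keyed transform (for example, a random projection followed by binarisation), only the transformed template is stored, and matching happens in the transformed domain. If the template is stolen, it can be revoked by issuing a new key. The Python below is a toy illustration of the idea using made-up feature vectors, not a production scheme:

  import numpy as np

  def make_user_key(dim_in: int, dim_out: int, seed: int) -> np.ndarray:
      # User-specific random projection matrix; reissuing the seed "revokes" the template.
      rng = np.random.default_rng(seed)
      return rng.standard_normal((dim_out, dim_in))

  def transform(features: np.ndarray, key: np.ndarray) -> np.ndarray:
      # Project the raw feature vector into a keyed space and keep only the signs,
      # so the stored template does not reveal the original biometric reading.
      return np.sign(key @ features)

  def match(template: np.ndarray, probe: np.ndarray) -> float:
      # Fraction of agreeing bits between the stored template and a fresh probe.
      return float(np.mean(template == probe))

  # Toy example with made-up 64-dimensional "fingerprint" features.
  rng = np.random.default_rng(0)
  enrolled = rng.standard_normal(64)
  noisy_rescan = enrolled + 0.1 * rng.standard_normal(64)  # same finger, noisy re-read
  impostor = rng.standard_normal(64)                       # a different finger

  key = make_user_key(64, 128, seed=12345)
  stored_template = transform(enrolled, key)  # only this is stored, never the raw scan

  print(match(stored_template, transform(noisy_rescan, key)))  # high score (same user)
  print(match(stored_template, transform(impostor, key)))      # near 0.5 (different user)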

Privacy issues and government intrusion
Many nations and businesses now store billions of fingerprints and faces. Much of this supports legitimate law enforcement purposes, but many privacy advocates worry that concentrating billions of people’s biometric attributes in any single entity’s hands invites abuse.

Bias
Biometrics can have technical bias, meaning bias caused by the technology itself. For example, many studies have shown that facial recognition systems have a harder time accurately recognising people with certain skin tones, because of how light reflects off the skin and affects the system’s ability to pick out features and geometry.

Technical biases can also develop in other circumstances. For example, some people are born without fingerprints (a condition called adermatoglyphia), and some without voices or eyes. Face tattoos, glasses, masks and hair can complicate facial recognition scans. Some labour-intensive jobs cause more “micro-abrasions”, which can cause problems with fingerprint scanners. It is just good to be aware of these potential biases and to avoid or modify the solution when possible.

Biometrics are a growing part of the digital authentication world. There are good biometric solutions and bad biometric solutions. Try to pick the more secure and more accurate ones. Even then, no biometric solution is unhackable or perfect. The best any defender considering a biometric solution can do is be aware of the trade-offs and pick the strongest option they can.
