Ai Editorial: Biometric authentication – keeping it safe from hackers

21st February, 2020

Ai Editorial: Security safeguards and privacy-related initiatives are becoming stronger. Biometric authentication is an interesting tussle, and the industry is looking at negating the moves of fraudsters and hackers, writes Ai’s Ritesh Gupta

 

Biometric authentication has numerous applications, and one of them is verifying or authorizing a transaction.

Among all the options, facial recognition has gained traction because it is non-intrusive, easy to use and fast, and its prominence has grown as smartphones have made it widely accessible.

Since biometric authentication recognizes an individual without friction, rather than relying on a password or PIN, it stands out for augmenting the user experience with speed, ease of use and the option to pay anywhere. But there are aspects that still need to be looked into: the repercussions of security-related risks, user privacy concerns and fraudulent transactions are all being probed at this juncture.

Plus, there are industry-related issues as well. For instance, this form of authentication does indicate that the cardholder himself or herself validated a transaction, but if the card network has no provision to accept such data as the primary proof, then that knowledge is of little use.

Concerns

According to Gemalto, the efficacy of facial recognition systems rests on three rates: false acceptance (an impostor is wrongly matched to an enrolled profile; this number should be low), false rejection (an enrolled user is wrongly refused; this should also be low) and true positive (an enrolled user is correctly matched to his or her profile; this number should be high).
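To make those rates concrete, here is a minimal sketch of how they could be computed from a batch of labelled match attempts. This is an illustration only; the function name and the toy data are assumptions, not Gemalto’s methodology.

```python
def recognition_rates(attempts):
    """Compute false acceptance, false rejection and true positive rates.

    `attempts` is a list of (is_enrolled, was_accepted) pairs: is_enrolled
    says whether the subject genuinely matches the profile, and was_accepted
    is the system's decision. (Illustrative only, not a vendor's API.)
    """
    genuine = [accepted for enrolled, accepted in attempts if enrolled]
    impostor = [accepted for enrolled, accepted in attempts if not enrolled]

    far = sum(impostor) / len(impostor)              # false acceptance: should be low
    frr = sum(not a for a in genuine) / len(genuine)  # false rejection: should be low
    tpr = sum(genuine) / len(genuine)                 # true positive: should be high
    return far, frr, tpr

# Example: 2 impostor attempts (one wrongly accepted), 3 genuine attempts
print(recognition_rates([(False, True), (False, False),
                         (True, True), (True, True), (True, False)]))
# -> (0.5, 0.333..., 0.666...)
```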

As for concerns, artificial intelligence (AI)-based identity fraud is emerging as a serious issue. What is coming under inspection is the efficacy of biometric security measures such as facial recognition. A primary concern that a section of the industry is highlighting is hackers and fraudsters managing to steal people’s faces. Recognition of one’s voice and face as a way to validate a person’s identity is under scrutiny with the rise of synthetic media and deepfakes. Deepfakes can be especially damaging because they can convincingly imitate the features of a person. They are powered by deep learning, whose algorithms are fed large amounts of data; by capitalizing on such data, deepfake videos manipulate audio and video to make it appear as though someone did or said something they didn’t. This poses a challenge to validating the legitimacy of information presented online.

As highlighted in one of Ai’s recent articles, initiatives are in the pipeline focusing on automated deepfake detection. Identity verification specialist Jumio emphasized that it is “vitally important to embed 3D liveness detection into identity verification and authentication processes”. The company is working on plans to combat advanced spoofing attacks, including deepfakes. (It is important to know that not all liveness detection is created equal, and many uncertified liveness detection solutions fall prey to deepfakes.) Among others, Facebook, too, was in the news last year for working on a ‘de-identification’ technology that morphs a person’s face so that they remain unrecognisable to facial recognition technology. Also, specialists are focusing on a kind of machine learning that spots patterns in image data through a system of artificial neurons loosely modelled on the functioning of the human brain, as the sketch below illustrates.
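That description matches a convolutional neural network. Purely for illustration, here is a minimal sketch of such a network framed as a live-versus-spoof classifier; the architecture, layer sizes and labels are assumptions, not any vendor’s actual model.

```python
import torch
import torch.nn as nn

class LivenessCNN(nn.Module):
    """Toy convolutional network that classifies a face crop as live or spoofed.
    Layer sizes are illustrative, not a production system's architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)     # 2 classes: live, spoof

    def forward(self, x):                  # x: (batch, 3, 64, 64) face crops
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = LivenessCNN()
scores = model(torch.randn(1, 3, 64, 64))  # random input, just to show shapes
print(scores.shape)                         # torch.Size([1, 2])
```

In practice, robust liveness systems combine image classifiers like this with additional signals such as depth and motion, which is part of why uncertified liveness solutions vary so much in how well they resist deepfakes.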

Companies like Apple acknowledge that much of our digital lives are stored on their devices, and that it is important to protect that information. While the technology in these devices can automatically adapt to changes in one’s appearance, such as wearing cosmetic makeup or growing facial hair, the industry is also looking at areas like preventing a device from unlocking when presented with a sleeping face. These companies are also using smarter technologies. For instance, Apple has highlighted that the camera on its devices captures accurate face data by projecting and analyzing over 30,000 invisible dots to create a depth map of the face, and it also captures an infrared image of the face. Each time a user unlocks their device, the camera captures precise depth data and an infrared image, and this information is matched against the saved mathematical representation to verify the user.
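Conceptually, that final match step amounts to comparing a freshly computed mathematical representation of the face (an embedding vector) against the enrolled one, and accepting the unlock only if the two are close enough. The sketch below is a generic illustration of that idea, not Apple’s actual algorithm; the embedding dimension, similarity measure and threshold are all assumptions.

```python
import math

ACCEPT_THRESHOLD = 0.8  # illustrative; real systems tune this against FAR/FRR targets

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(live_embedding, enrolled_embedding):
    """Accept the unlock attempt only if the fresh embedding computed from the
    depth map and infrared image is close enough to the enrolled template."""
    return cosine_similarity(live_embedding, enrolled_embedding) >= ACCEPT_THRESHOLD

# Toy example with 4-dimensional embeddings (real ones are far larger)
print(verify([0.9, 0.1, 0.3, 0.2], [0.88, 0.12, 0.31, 0.19]))  # True: very similar
print(verify([0.9, 0.1, 0.3, 0.2], [0.1, 0.9, 0.2, 0.7]))      # False: different face
```

The threshold is the lever that trades the rates discussed earlier against each other: raising it makes false acceptances rarer but false rejections more common.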

Earlier this year, Apple asserted that the probability of a random person looking at a user’s iPhone or iPad Pro and unlocking it with Face ID is approximately 1 in 1,000,000 with a single enrolled appearance. For more, read here.

 

Keen on exploring fraud prevention, data privacy and protection issues?

Check out Ai’s conferences scheduled for 2020: https://lnkd.in/fE7UK_T