Taking on Compliance Concerns

Biometric authentication is already a fact of life for billions of people around the world. They routinely use their faces or fingerprints to activate smartphones, PCs or other personal devices. Hundreds of millions of people with smart speakers are delighted to find that their voice assistant is able to offer personalized responses based on the unique quality of their voices. Opus Research estimates that over half a billion people have already enrolled their voiceprints with their banks, brokers, tax offices and others in order to simplify and accelerate required authentication procedures and carry out their business on trusted communications links. 

The appeal extends to the commercial space: in 2019, Deloitte published results of an executive survey in which 52% of respondents indicated that they were planning or evaluating the use of biometric authentication. Of those, 61% expected to use it in contact centers; 59% expected it to be integrated into a mobile app or chatbot; and 56% expected to integrate biometrics with a “voicebot.” 

Acceptance of biometrics to support secure or trusted access, interactions and transactions is not surprising. There has been a steady crescendo in efforts to replace time-consuming and inconvenient knowledge-based identifiers (KBIs), especially username/password combinations, as well as the even-more-time-consuming “challenge questions,” with techniques that are more convenient and speedier. Biometric authentication, thanks to its popularity for device activation, is a worthy candidate. 

Yet all of the forces militating toward broader use of biometrics operate under the shadow of global concern surrounding “compliance” with prevailing laws, regulations and industry standards governing protection of privacy and personally identifiable information (PII). In terms of PII, skeptics ask, “What could be more personal than a biometric template?” It is literally a digitized version of “something you are.” 

Proper Handling of “Sensitive Information”

Compliance with privacy requirements is, rightfully, an evergreen concern. Stories that Apple, Amazon and Google are capturing and storing voice biometric data (voiceprints) without permission have been a magnet for law firms stepping in on behalf of classes of plaintiffs who, they believe, are entitled to significant damages. They invoke a wide range of privacy laws, but you most often hear of the GDPR (General Data Protection Regulation), which was adopted in the EU in 2016 and became fully enforceable in 2018. 

GDPR places biometric data in a “special category of personal data.” Yet it is not the only set of rules that pertains to compliance. There are privacy regulations enforced or being legislated in over 60 countries worldwide. In the United States we often hear of the state of Illinois’ Biometric Information Privacy Act (BIPA), which was very far-sighted when it went into effect in 2008. It has been the basis of hundreds of class action lawsuits and is a foundation on which litigators continuously expand the definition of activities and processes that comprise “improper collection, use, storage, and dissemination of biometric data.” 

In a nutshell, GDPR and BIPA expressly prohibit these special categories of personal data from being used in a process that “allows or confirms the unique identification of that natural person…” unless: 

  • consent has been given explicitly
  • biometric information is necessary for carrying out obligations of the controller or the data subject in the field of employment, social security and social protection law
  • it is essential to protect the vital interests of the individual and he/she is incapable of giving consent
  • it is critical for any legal claims
  • it is necessary for reasons of public interest in the area of public health. 

For companies that intend to implement voice-based authentication, the focus has been on “explicit consent.” The fact that over 500 million voiceprints have been enrolled is ample evidence that efforts to gain such consent have been largely successful. 

Active enrollment, which can be carried out either by a live agent or in an IVR, requires the customer to take part in a process of capturing a text-dependent phrase, such as “my voice is my password.” Usually, the phrase needs to be repeated three times as part of an enrollment process that encompasses explicit consent. 
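As a rough illustration of the flow just described (not any vendor’s actual API; the function name, data shapes and pass-phrase handling here are hypothetical), an active-enrollment routine would gate voiceprint creation on explicit consent and on a minimum number of captures of the enrollment phrase:

```python
# Hypothetical sketch of an active (text-dependent) enrollment gate.
# Names and the repetition count are illustrative assumptions.
PASSPHRASE = "my voice is my password"
REQUIRED_REPETITIONS = 3

def enroll_active(consent_given: bool, captures: list[str]) -> dict:
    """Allow voiceprint creation only after explicit consent and
    enough matching repetitions of the enrollment phrase."""
    if not consent_given:
        return {"enrolled": False, "reason": "no explicit consent"}
    matching = [c for c in captures if c.strip().lower() == PASSPHRASE]
    if len(matching) < REQUIRED_REPETITIONS:
        return {"enrolled": False, "reason": "insufficient repetitions"}
    # In a real system the audio, not the transcript, would feed the
    # biometric engine, which returns a binary template.
    return {"enrolled": True, "reason": "ok"}

print(enroll_active(True, [PASSPHRASE] * 3))   # enrolled
print(enroll_active(False, [PASSPHRASE] * 3))  # blocked: no consent
```

The essential design point, whatever the implementation, is that the consent check precedes any capture of biometric material.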

Passive enrollment uses a live conversation or recordings to build a text-independent voiceprint. Consent can be obtained by using a “preamble” to each call stating that “this call is being recorded for training and security purposes, including creation of a biometric template to authenticate your voice in future calls.” Where regulations allow, such an “opt-out” strategy (in which the customer is informed that his or her voice is being used for a specific set of reasons) has proven preferable because it achieves higher enrollment rates than alternative “opt-in” methods in which live agents or prompts in an IVR seek explicit consent to create a voiceprint. 
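The opt-out/opt-in distinction can be expressed as a simple consent gate. This is a minimal sketch under assumed policy labels (the names “opt_out” and “opt_in” and the function itself are illustrative, not drawn from any statute or product):

```python
# Hypothetical consent gate for passive (text-independent) enrollment.
# "opt_out" jurisdictions permit enrollment after a recorded-call
# preamble unless the caller objects; "opt_in" jurisdictions require
# explicit consent before any voiceprint is built.
def may_enroll_passively(policy: str, preamble_played: bool,
                         caller_objected: bool,
                         explicit_consent: bool) -> bool:
    if policy == "opt_out":
        return preamble_played and not caller_objected
    if policy == "opt_in":
        return explicit_consent
    return False  # unknown jurisdiction: default to no enrollment

print(may_enroll_passively("opt_out", True, False, False))  # True
print(may_enroll_passively("opt_in", True, False, False))   # False
```

Defaulting to “no enrollment” for an unrecognized jurisdiction reflects the privacy-by-design principle discussed later in this section.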

In real life, consent can be obtained through many channels. Email, rich text messaging and even physical mailings have been used to obtain customer consent. Where regulations allow, customers who have opted in automatically create their voiceprints when they next call into the contact center. 

Consent is just the beginning.

As a practical matter, the systems that support biometric authentication must embody principles of “privacy by design.” Banks, in every instance, already have internal rules in place regarding data security. Under those rules, voiceprints must be treated as securely as any other personal data stored by the business. 

More recently, there has been global concern over “the right to be forgotten.” For example, when a person “quits” Facebook, he or she wants assurance that all personal information is removed. Likewise, there must be documented procedures that support a customer’s effort to have his or her voiceprints removed from customer records. 
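A “documented procedure” for deletion pairs the removal itself with an audit record proving it happened. The sketch below is an assumption-laden illustration (the in-memory dictionary stands in for a real biometric store, and all names are hypothetical):

```python
# Minimal sketch of a "right to be forgotten" handler: delete the
# voiceprint and keep a documented audit trail of the deletion.
from datetime import datetime, timezone

voiceprints = {"cust-123": b"\x01\x02"}  # customer_id -> binary template
audit_log = []

def forget_customer(customer_id: str) -> bool:
    """Remove the customer's voiceprint and record the action."""
    removed = voiceprints.pop(customer_id, None) is not None
    audit_log.append({
        "customer_id": customer_id,
        "action": "voiceprint_deleted" if removed else "no_voiceprint_found",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return removed

print(forget_customer("cust-123"))   # True: template removed and logged
print("cust-123" in voiceprints)     # False
```

The audit entry, not the deletion alone, is what lets an enterprise demonstrate compliance after the fact.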

General concern about leaving one’s voiceprint with multiple businesses is not grounded in reality. As Ori Akstein, NICE Authentication and Fraud Product Manager, explains, “a voiceprint is a binary object that represents the characteristics of a speaker’s voice and does not include any personal information, content of calls or audio portions.” It can only be used by the voice biometrics engine for which it was created; if such a file were to be stolen, it could not be used outside of the system that created it. Regarding the need to comply with data privacy regulation, he explains that “NICE authentication and fraud prevention solutions provide the ability to manage customers’ data and their voiceprint, including the option to delete the voiceprint and any associated customer data, satisfying the regulatory need to allow consumers to opt out with the ‘right to be forgotten.’” 

Finally, Document Your Work

Given the current climate surrounding protection of personal privacy, it is highly likely that the geographic scope of prevailing regulations, as well as the range of modalities they cover, will continue to expand. In the U.S., for instance, virtually every state has pending biometric privacy legislation, and it is reasonable to assume that the same will be true in countries across Europe, Latin America, Asia and Africa. In case after case, such regulations have steadily added “behavioral” information to the protected category. 

Before implementation, current authentication systems need to be analyzed and compared to the desired process. From day one, enterprises must document the steps they are taking to implement a voice-based authentication or fraud-prevention system. These documents should confirm that the methods for capturing, storing and using voiceprints comply with prevailing laws and regulations.

Therefore, follow this checklist to ensure that your implementation stays within the bounds of what is permitted: 

  1. Determine if and what biometric information is being collected: biometric information can include fingerprints, voiceprints, retinal/iris scans and facial scans.

    If the information collected qualifies as biometric, then steps (including disclosures and storage safeguards) to handle and protect this information are required.
  2. Biometric data storage policy: Have a clear written policy in place for handling biometric information. What is the duration and purpose for which the biometric information is being used? Policies should include how long biometric information will be kept and when it will be destroyed.
  3. Informed, explicit consent: How or in what form is informed consent obtained? What procedures are carried out over the phone? What other efforts are undertaken (e.g., mailing written information packets)?
  4. Data safeguarding: Is the biometric information protected according to the same security protocols used for other types of personally identifiable information? Will it be stored internally or with a third-party vendor? Do contracts with third-party vendors that process or store biometric information address with specificity how vendors secure this data?
  5. State law compliance: Is the organization prepared to comply with the applicable state breach notification laws in the event a security breach affects employees’ biometric data?
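Item 2 of the checklist, the retention and destruction schedule, is easy to operationalize once the policy is written down. This is a minimal sketch; the three-year period is an illustrative assumption, not a figure from any statute:

```python
# Illustrative retention-policy check: flag voiceprints that have
# outlived their documented retention period for destruction.
# The 3-year window is an assumed example, not a legal requirement.
from datetime import date, timedelta

RETENTION = timedelta(days=3 * 365)

def due_for_destruction(enrolled_on: date, today: date) -> bool:
    """True when the voiceprint has exceeded the retention period."""
    return today - enrolled_on > RETENTION

print(due_for_destruction(date(2020, 1, 1), date(2024, 1, 1)))  # True
print(due_for_destruction(date(2023, 6, 1), date(2024, 1, 1)))  # False
```

Running such a check on a schedule, and logging each destruction, gives the written policy the documented enforcement trail that regulators and litigators look for.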