Welcome to the Facial Recognition Systems Wiki

An overview of facial recognition systems and the benefits and risks of this popular authentication tool.

Introduction

Facial recognition software has improved dramatically since it first emerged in the 1960s (DuVal, n.d.), and today it holds potential across an ever-wider range of environments. Facial recognition systems are now being tuned to identify suspects in criminal investigations, to let people securely order and pay for products using only their faces, and to control access to secure locations and devices. The market for these systems grows with each new advancement. However, as tempting as this highly convenient method of identification is, it is important to consider all facets of the technology. Before governments, tech companies, and consumers embrace facial recognition systems, they should weigh not only the positives, including convenience and security, but also the negatives: a legal system that lags behind the technology, the ethical and security obligations of safeguarding biometric databases, and the privacy concerns raised by the many layers of systems working to identify consumers.

Background

Building a database of facial features for identification has been a work in progress since the 1960s (DuVal, n.d.). The effort gained real momentum in 1993, when the Department of Defense Counterdrug Technology Development Program Office sponsored the Face Recognition Technology (FERET) program. According to the National Institute of Standards and Technology (NIST) (Face Recognition Technology, 2011):

The FERET program consisted of three phases, each one year in length. The goals of the first phase were to establish the viability of automatic face recognition algorithms and to establish a performance baseline against which to measure future progress. The goals of phases 2 and 3 were to further develop face recognition technology.

When the FERET program concluded in 1997, facial recognition software was in use only in universities and research labs. Building on the FERET results, NIST conducted the Face Recognition Vendor Test (FRVT) in 2000 to evaluate the viability and capabilities of emerging commercial facial recognition software. According to NIST (Face Recognition Vendor Test, 2010):

FRVT 2000 consisted of two components: the Recognition Performance Test and the Product Usability Test. The Recognition Performance Test was a technology evaluation. The goal of the Recognition Performance Test was to compare competing techniques for performing facial recognition. All systems were tested on a standardized database. The standard database ensured all systems were evaluated using the same images, which allowed for comparison of the core face recognition technology. The product usability test examined system properties for performing access control.

Since 2000, the FRVT has been repeatedly refined to more accurately measure the performance of commercial facial recognition systems as they, too, become more complex and capable.
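The core idea behind a standardized evaluation like the FRVT is simple: every competing system is scored against the exact same labeled data, so differences in the results reflect the algorithms rather than the test images. The sketch below illustrates that idea in Python; the matcher functions, pair list, and threshold are hypothetical stand-ins, not NIST's actual protocol.

```python
# Sketch: score competing face matchers on one shared, labeled pair list,
# so differences reflect the algorithms rather than the test data.
# (Hypothetical matchers and data; not the actual FRVT protocol.)

def evaluate(matcher, pairs, threshold):
    """pairs: list of (image_a, image_b, same_person: bool)."""
    false_matches = false_non_matches = impostors = genuines = 0
    for img_a, img_b, same_person in pairs:
        accepted = matcher(img_a, img_b) >= threshold
        if same_person:
            genuines += 1
            false_non_matches += not accepted
        else:
            impostors += 1
            false_matches += accepted
    return {"FMR": false_matches / impostors,
            "FNMR": false_non_matches / genuines}

# Every candidate system sees the exact same pairs and threshold:
# for name, matcher in systems.items():
#     print(name, evaluate(matcher, shared_pairs, threshold=0.6))
```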

Benefits

Facial recognition can be a very powerful identification tool. The face has a whole host of unique features that can be used to identify individuals for law enforcement, security, and commercial applications. “Face recognition has been widely adopted because the face has a significant role in conveying an individual's identity in social interaction; it is not hidden; and recognizing it requires neither advanced hardware nor physical contact” (Akhtar & Rattani, 2017). As a result of this convenient software, its use has spread rapidly across a wide variety of agencies and businesses. As Akhtar and Rattani (2017) explain,

For example, FRSs are part of the US Visitor and Immigrant Status Indicator Technology (US-VISIT), a US Customs and Border Protection management system and of the Unique Identification Authority of India (UIDA), which issues unique ID numbers to all Indian residents. Pervasive software, such as Microsoft Windows 10 and Kinect, use face recognition when users attempt to access the dashboard and automatically login to a profile, such as that on Xbox Live. The Toshiba YL863 TV uses face biometrics to provide customized, automatic, and advanced use settings, while the Sony HX920 TV uses face biometrics to sound an alert when viewing distance is too short or to power off the TV when it detects the absence of a viewer. Face biometrics are also used ubiquitously as a password alternative on some mobile devices. Examples include the Android KitKat mobile OS, Lenovo VeriFace, Asus Smart-Logon, and Toshiba SmartFace.

The applications reach well beyond those previously listed. Face++ (“face plus plus”), a leading Chinese startup, uses its own software to grant access to its flagship office in Beijing. The same software is used to transfer money through the Alipay app using only facial credentials. Didi, a dominant ride-hailing company in China, lets users confirm that the person behind the wheel is a legitimate driver using Face++ software. Even local governments are using Face++ to identify suspected criminals in video surveillance systems around the country (Knight, 2017).
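Under the hood, most modern systems like these reduce a face image to a fixed-length numeric vector (an “embedding”) and compare embeddings rather than raw pixels. The following is a minimal sketch of that verification step; the embedding function and threshold value are assumptions for illustration, not any vendor's actual implementation.

```python
import numpy as np

# Sketch of embedding-based face verification: a face image is mapped to a
# fixed-length vector, and two faces "match" when their vectors are close.
# (embed() is a hypothetical model; real systems use a trained neural network.)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(embed, enrolled_image, probe_image, threshold: float = 0.6) -> bool:
    """Return True when the probe face is close enough to the enrolled face."""
    return cosine_similarity(embed(enrolled_image), embed(probe_image)) >= threshold
```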

Beyond convenience, biometrics-based credentials are held to be much safer than traditional passwords. Biometric credentials can range from several hundred bytes to over a megabyte of data. A password carrying the same amount of information would quickly run into usability problems: it would be almost impossible to remember such a lengthy passphrase, and typing it would take an unreasonable amount of time (Ratha, Connell & Bolle, 2001, p. 615).
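A back-of-the-envelope calculation makes that usability point concrete. Treating every template byte as random data (which overstates the real entropy of biometric templates, but matches the raw size comparison above), a password drawn from the 94 printable ASCII characters would need hundreds to millions of characters to carry the same number of bits:

```python
import math

# Back-of-the-envelope comparison: how long must a random password be to
# carry as many bits as a biometric template of a given size?
# (Template sizes are the illustrative range cited by Ratha et al.)

ALPHABET = 94                          # printable ASCII characters
BITS_PER_CHAR = math.log2(ALPHABET)    # ~6.55 bits per random character

for template_bytes in (300, 1_000_000):   # "several hundred bytes" to ~1 MB
    chars_needed = math.ceil(template_bytes * 8 / BITS_PER_CHAR)
    print(f"{template_bytes} bytes -> ~{chars_needed} random characters")

# 300 bytes -> ~367 random characters; 1 MB -> ~1.2 million characters,
# far beyond anything a person could memorize or type.
```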

Legal & Ethical Concerns

With technology advancing at an almost alarming, breakneck speed, our society has been put to task to find ways to properly handle the new influx of collected data. While society has privacy laws in place to protect traditional personal information, we lack rules and laws to govern the collection and use of, and decisions made from, this new sensitive data. According to Richards and King (2014), four values should guide the formation of these necessary laws: privacy, confidentiality, transparency, and identity. Richards and King (2014) argue that conversations about privacy should start with the hot topic of the collection of personal data, but, as such conversations too often do, they should not end there (pp. 408-409).

As early as January 1999, Sun Microsystems CEO Scott McNealy declared privacy dead. That mindset has been repeated over and over as the scope of our data collection efforts has expanded. However, Richards and King (2014) argue that the Snowden leaks have reignited the privacy debates. Why debate at all if privacy is dead? They argue,

If we think about privacy as the amount of information we can keep secret or unknown, then that kind of privacy is certainly shrinking. We are living through an information revolution, and the collection, use, and analysis of personal data is inevitable. But if we think about privacy as the question of what rules should govern the use of personal information, then privacy has never been more alive. (p. 410)

Richards and King (2014) go on to argue that the word “privacy” has become shorthand for the more relevant phrase “information rules” (p. 411). It is critical that we continue to add such information rules, so that users' data, in this case the very intimate details of their faces, remains protected and users retain the ability to dictate acceptable uses of that data as technology continues to change.

When considering confidentiality, Richards and King (2014) outline the issues beautifully:

With the power of big data to make secondary uses of the private information we share in confidence, restoration of trust in the institutions we share with rests not only with privacy but in recognition that shared private information can remain “confidential.” In other words, private digital information that we share with third parties we trust can still be regulated by privacy law. (pp. 413-414)

As society readily accepts the convenience of using faces as authentication credentials, it must consider that this sensitive data may enable data analysts to find new, secondary uses for it. If confidentiality guidelines and rules are not put in place at a pace that matches the changes in facial recognition software, control over users' data shifts from their hands to big data's. As the technology advances, it is critical that companies live up to the level of trust users are required to place in them, and that repercussions exist to hold them accountable.

Another critical value that must be taken into consideration is transparency. According to Richards and King (2014), “transparency, like confidentiality, also fosters trust by being able to hold others accountable. Transparency of government information plays a crucial role in ensuring constitutional checks and balances among the branches of government, a free press, and individual citizens” (p. 419). Transparency about the data these facial recognition systems collect gives users another check that their facial data is being used only for purposes related to the system at hand. Without this expectation of transparency and its enforcement, data management norms will once again be set by self-serving companies and governments.

The last value that must be considered is the concept of identity. Richards and King (2014), once again, nail the finer points of the issue:

Big data requires us also to think more deeply about identity. Identity, like privacy, is hard to define but equally vital to protect. Whereas privacy harkens from the right to be let alone, identity hails from the fundamental right to define who we are. Protecting privacy, especially intellectual privacies, helps protect identity by giving individuals room to make up their own minds. Yet privacy protections are not enough in our new age of the big metadata computer because big data analytics can compromise identity by allowing institutional surveillance to moderate and even determine who we are before we make up our own minds. Therefore, we are concerned that big data can compromise identity and believe that, in addition to privacy and confidentiality protections, we must begin to think about the kinds of big data predictions and inferences that we will allow and the ones that we should not. (p. 422)

As more detailed facial data becomes integrated into big data, it is not hard to imagine that it can and will be linked into those inferences. Thinking broadly, Richards and King (2014) argue that, in the name of protecting and serving us, institutions work to identify and categorize everyone; as surveillance programs accumulate detailed records of our daily lives, the government's power to discriminate, coerce, or selectively target critics grows (p. 424). With the power to see an individual's facial features (and, most notably, ethnicity) and then connect the dots to see which buildings, apps, or services that individual uses, agencies could easily infer who the individual is and discriminate against them in any number of ways. That practice is decidedly unethical, and measures absolutely must be put in place to prevent it from taking root.

Security Concerns

Building large, secure databases of sensitive authentication data is certainly not a new problem in today's era of big data. However, as biometrics-based authentication takes off, it is critical that these systems be designed to withstand attacks and leaks. According to Ratha et al. (2001), “the consequences of an insecure authentication system in a corporate or enterprise environment can be catastrophic, and may include loss of confidential information, denial of service, and compromised data integrity” (p. 614). Given the very intimate uses of facial recognition for authentication in e-commerce, banking, and access control, the consequences of a data leak or loss of access could be devastating for users. If facial data falls into the wrong hands, fraud can be taken to an entirely different level, precisely because of the perceived security of facial credentials.

However much more secure biometric credentials are than passwords, hackers will always be on the lookout for, and will find, weak points in the system. According to Ratha et al. (2001),

Password systems are prone to brute force dictionary attacks. Biometric systems, on the other hand, require substantially more effort for mounting such an attack. Yet there are several new types of attacks possible in the biometrics domain. This may not apply if biometrics is used as a supervised authentication tool. But in remote, unattended applications, such as Web-based e-commerce applications, hackers may have the opportunity and enough time to make several attempts, or even physically violate the integrity of a remote client, before detection. (p. 615)
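The remote, unattended scenario described in the quotation is one that system designers can directly mitigate by limiting how many match attempts a client may make before being locked out. The sketch below shows one such policy; the attempt limit and lockout window are hypothetical values, not a recommendation from Ratha et al.

```python
import time

# Sketch: throttle repeated face-match attempts from a remote client so an
# attacker cannot probe the matcher indefinitely. (Hypothetical policy values.)

class AttemptLimiter:
    def __init__(self, max_attempts: int = 5, lockout_seconds: int = 300):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self._failures = {}   # client_id -> (failure_count, last_failure_time)

    def allowed(self, client_id: str) -> bool:
        count, last = self._failures.get(client_id, (0, 0.0))
        if count >= self.max_attempts and time.time() - last < self.lockout_seconds:
            return False      # still locked out after too many failures
        return True

    def record_failure(self, client_id: str) -> None:
        count, _ = self._failures.get(client_id, (0, 0.0))
        self._failures[client_id] = (count + 1, time.time())

    def record_success(self, client_id: str) -> None:
        self._failures.pop(client_id, None)   # reset on a legitimate login
```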


Ratha et al. (2001) go on to note that compromised biometric data is significantly harder to replace than key-based or token-based data. If token data is compromised, it is easy enough to cancel the token and assign a new one to the user, but a user has only so many biometric features. If facial data is compromised and facial credentials are the only ones used, the user has few options (p. 615). As these security issues are worked out, the safest course of action may be to require two-factor or even multi-factor authentication to ensure security for users.
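In code, the two-factor idea amounts to never letting a face match grant access on its own: a revocable second factor must also pass, so a leaked face template can be rendered useless by canceling and reissuing the other factor. A minimal sketch, with hypothetical application functions:

```python
# Sketch of the two-factor idea above: a face match alone never grants access;
# a revocable second factor (here, a server-side token check) must also pass.
# If the biometric data leaks, the token can still be canceled and reissued.
# (verify_face and valid_token are hypothetical application functions.)

def authenticate(user_id, probe_image, token, verify_face, valid_token) -> bool:
    face_ok = verify_face(user_id, probe_image)   # non-revocable factor
    token_ok = valid_token(user_id, token)        # revocable factor
    return face_ok and token_ok
```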

Social Concerns

As advanced as facial recognition systems are, they are certainly not perfect yet. According to Akhtar and Rattani (2017), facial recognition accuracy varies with demographic attributes such as gender, age, and racial heritage, which creates unintentionally discriminatory systems. They go on to say,

Considerably more research is needed in analyzing faces acquired in unconstrained conditions. The lack of public databases containing metadata with multiple labels (for example, lifestyle, geography, and occupation) has further stymied efforts to address this problem. Crowdsourcing might be useful to collect ground truth for such large datasets. Information fusion from multimodality imaging sensors might also help, as would the design of a single special-features extractor that can be used both to estimate demographic attributes and to aid recognition. However, few researchers have studied the interrelationships of age, race, and gender, which is required for any solution that fuses demographic and visual attributes (such as a pointed nose) for face recognition and search engines.

To combat these unintentionally discriminatory systems, researchers must step up to the task of studying these interrelationships. Without that research, facial recognition software will remain unusable for huge parts of the population.
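One concrete first step for such research is simply to report error rates per demographic group rather than as a single aggregate, so that accuracy gaps are visible at all. A minimal sketch of that bookkeeping, assuming a hypothetical list of labeled match results:

```python
from collections import defaultdict

# Sketch: break false non-match rate (FNMR) out by demographic group rather
# than reporting one aggregate number, so accuracy gaps become visible.
# (records is a hypothetical list of (group_label, same_person, accepted).)

def fnmr_by_group(records):
    errors, genuines = defaultdict(int), defaultdict(int)
    for group, same_person, accepted in records:
        if same_person:                 # only genuine pairs contribute to FNMR
            genuines[group] += 1
            errors[group] += not accepted
    return {group: errors[group] / genuines[group] for group in genuines}
```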

References

Akhtar, Z., & Rattani, A. (2017, April 26). A face in any form: New challenges and opportunities for face recognition technology. Computer, 50(4). Retrieved from http://ieeexplore.ieee.org.mutex.gmu.edu/document/7912246/

DuVal, A. (n.d.). History of facial recognition software [Blog post]. Retrieved from http://forensicpsych.umwblogs.org/research/criminal-justice/face-recognition-software/

Face recognition technology (FERET). (2011, January 25). Retrieved from https://www.nist.gov/programs-projects/face-recognition-technology-feret

Face recognition vendor test (FRVT) 2000. (2010, December 2). Retrieved from https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000

Knight, W. (2017). Paying with your face. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/603494/10-breakthrough-technologies-2017-paying-with-your-face/

Ratha, N. K., Connell, J. H., & Bolle, R. M. (2001). Enhancing security and privacy in biometrics-based authentication systems. IBM Systems Journal, 40(3), 614-634. Retrieved from https://search-proquest-com.mutex.gmu.edu/docview/222418906?accountid=14541

Richards, N. M., & King, J. H. (2014, May 19). Big data ethics. Wake Forest Law Review, 49, 408-424. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2384174

