Eight Things You Didn’t Know about Facial Recognition

Facial recognition is a technology that has become common in recent years for a variety of uses, some as benign as unlocking your iPhone or Tesla, and some as controversial as its use by police forces to identify and arrest suspects.

1. How do facial recognition algorithms work?

Facial recognition algorithms match an image of a specific person against a watch-list of images uploaded to a database in advance. Safe, supervised systems have a clear policy on who can be on the list, for example, people against whom there is an arrest warrant or a restraining order. Unsupervised facial recognition systems may instead be fed an illegally extracted database of images and identities, for example one scraped from social networks.
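The matching step described above can be sketched in a few lines. This is a simplified illustration, not any vendor's actual pipeline: real systems first run a face through a trained encoder network to produce an embedding vector, whereas here the embeddings, the watch-list names, and the similarity threshold of 0.6 are all made-up toy values.

```python
import numpy as np

def cosine_similarity(a, b):
    # Compare two face embeddings; values near 1.0 mean "very likely the same face".
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe, watchlist, threshold=0.6):
    """Return (best identity, score), or (None, score) if nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, enrolled in watchlist.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy 4-dimensional "embeddings"; real encoders output vectors with hundreds of dimensions.
watchlist = {
    "alice": np.array([0.9, 0.1, 0.0, 0.1]),
    "bob":   np.array([0.0, 0.8, 0.5, 0.1]),
}
probe = np.array([0.85, 0.15, 0.05, 0.1])   # closest to "alice"
print(match_against_watchlist(probe, watchlist))
```

The threshold is what separates a supervised deployment from a reckless one: set too low, the system force-matches strangers to whoever on the list looks vaguely similar.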

2. What is the difference between police use and commercial facial recognition?

In the US, several states and cities have restricted police use of facial recognition technology, and the proposed EU legislation on AI also imposes specific restrictions on its use by law enforcement agencies. The first question that comes to mind: why ban police from using a technology that private companies are allowed to use?

The key difference lies in the question each system answers. The police get a picture of a suspect from a crime scene and want to find out: "Who is the person in the picture?" Answering that requires as wide a database as possible, optimally, photos and identities of all the people in the world. Commercial facial recognition products, such as those used by supermarkets, football stadiums or casinos, answer different questions: "Is the person in the picture on the employee list? Is the person in the picture on a watch-list of known shoplifters?" Answering these questions doesn't require a broad database, only a defined list of employees or a watch-list of specific people against whom there is an arrest warrant or a restraining order. In other words, your image and identity are probably already in the databases of facial recognition systems used by the police, but not in the one used by your local supermarket.
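The commercial, closed-list case above can be illustrated with a short sketch. The point is the "unknown" outcome: unlike an open-ended police search over millions of identities, a well-built commercial system only decides whether a face belongs to its short enrolled list, and reports everyone else as unknown. The employee names and three-dimensional toy embeddings here are invented for illustration.

```python
import numpy as np

def best_match(probe, enrolled, threshold=0.7):
    # Closed-set check: "Is this person on our list?" Anyone whose best
    # similarity falls below the threshold is reported as unknown,
    # never force-matched to the nearest enrolled face.
    scores = {name: float(np.dot(probe, v) / (np.linalg.norm(probe) * np.linalg.norm(v)))
              for name, v in enrolled.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else "unknown"

# Hypothetical employee list with toy embeddings.
employees = {
    "dana": np.array([1.0, 0.0, 0.0]),
    "omar": np.array([0.0, 1.0, 0.0]),
}
print(best_match(np.array([0.95, 0.05, 0.0]), employees))  # an enrolled employee
print(best_match(np.array([0.0, 0.1, 1.0]), employees))    # a passer-by
```

A passer-by never needs to be in the database at all, which is exactly why the privacy exposure of such systems is narrower than that of a police-scale identity search.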

3. Early versions of facial recognition algorithms suffered from gender and ethnic biases

As with other common technologies, early facial recognition systems performed worse on women and people of color than on white men. That is, the probability of a Black man being misidentified was higher than the probability of a white man being misidentified. This is because the first algorithms were built by teams that trained them on databases consisting mostly of white men, so the resulting models were inherently biased.

4. A public challenge to developers from around the world has proven – ethnic biases can be eliminated

In 2020, the Fair Facial Recognition Challenge was held at the European Conference on Computer Vision (ECCV). 150 teams from the AI industry and academia were invited to participate and test whether their algorithms are racially biased, that is, whether they have the same chance of incorrectly identifying people of different races. The results showed that AI-based facial recognition systems have improved dramatically in the last couple of years and can achieve unprecedented accuracy in these kinds of scenarios.

5. How did some facial recognition firms manage to eliminate racial bias?

We train our algorithms on a wide range of video and still images of people of different races, genders and ages. In this way, we were able to eliminate the biases that were common in previous generations of face recognition algorithms.

6. There is currently no federal legal framework for facial recognition in the United States

During 2020, in the wake of the BLM protests, the use of facial recognition technology came up for widespread public discussion, including allegations of unethical and unfair use by law enforcement agencies as well as ethnic biases in the algorithms. In June 2020, Amazon, Microsoft and IBM announced that they would stop providing their facial recognition systems for police use. There is currently no federal legal framework for the use of the technology in the United States; state legislators and U.S. cities have independently enacted laws restricting its use in public spaces. Recently, New York State banned facial recognition in schools and Massachusetts restricted police use.

7. Covid has boosted the demand for facial recognition solutions

Covid-19 has boosted the demand for touchless access control and remote authentication solutions. Until the advent of face recognition, access control and attendance systems required physical contact with surfaces. Since the outbreak of the pandemic, we have seen a surge in demand for contactless solutions that verify entry permissions and open doors without touch. Another area that has gained momentum in the past year is remote authentication, for example when opening a bank account remotely.

8. Recognition capabilities have not been compromised, even when people wear masks in public spaces

Modern algorithms can identify people with a very high level of accuracy even when they wear masks, because every human face is unique enough that the roughly 30% of the face surface that remains visible is sufficient for reliable identification.
