Facial recognition is a technology commonly used in artificial intelligence tools. This post reviews the current situation and future outlook for regulating facial recognition technology in Europe and the US.
Facial recognition is a biometric technology used to detect and identify specific faces in an image by analyzing the individual’s features. Facial recognition technology (“FRT”) often uses artificial intelligence (“AI”) tools and can serve very diverse purposes, ranging from the most common, i.e., identity verification or authentication, to remote identification for law enforcement or security purposes. As discussed in this blog, the use of AI and FRT is expanding rapidly, both among private actors and public authorities.
Despite FRT’s increasingly prominent role in our lives, there is still little regulation on it, particularly in Europe. In the European Union (“EU”), there are provisions applicable to biometric technologies, such as data protection rules, but they apply only incidentally rather than directly. Spain also lacks specific regulation, but, at least in the field of data protection, the Spanish Data Protection Agency (“AEPD”) has taken a conservative approach to biometrics, and in 2020 it even published a technical note (discussed in this blog) addressing 14 misconceptions about the use of biometrics.
In the United States (“US”), there are certain provisions regulating facial recognition, but they are usually state-level or sector-specific rules (some of which have been discussed in this blog) rather than a comprehensive regulation. Additionally, in the US, the National Institute of Standards and Technology (NIST) has developed standards that serve as an international reference in the field.
However, the exponential increase in the use of FRT is raising concerns among many sectors and groups, which consider that the uncontrolled use of these technologies can jeopardize fundamental rights, as stated in Resolution A/HRC/48/31 of the United Nations Human Rights Council.
In this context, it is necessary to pass legislation and implement appropriate regulations on FRT to prevent abuses and mitigate risks. Currently, FRT poses four major challenges: (i) accuracy and proper functioning; (ii) data protection and privacy (since it often processes particularly sensitive data that can have a significant impact on individual rights and freedoms); (iii) possible biases (arising from the deep learning and AI on which it is based); and (iv) its possible use for mass surveillance and potential human rights violations.
To face these challenges, the European Commission (“EC”) recently published the new proposal for regulating facial recognition (the “Proposal for Regulating FRT”) as part of the EC’s Proposal for an EU Regulation on the legal framework applicable to AI systems, which we discussed here. The Proposal for Regulating FRT suggests classifying AI systems as: (i) prohibited AI systems; (ii) high-risk AI systems; (iii) low/medium-risk AI systems; and (iv) other AI systems; thus imposing different requirements and obligations depending on the specific use of this technology.
The Proposal suggests that most possible uses of FRT for security purposes be prohibited or classified as high-risk, i.e., allowed only in a few specific cases subject to express authorization. It also proposes that biometric identification systems be classified as high-risk, so their use would be subject to various requirements, restrictions and control mechanisms.
The next step in the US could be a federal regulation, but, unlike the EU, the US so far has no clear proposal for such a regulatory framework. However, in certain cases the Federal Trade Commission (“FTC”) has already acted against what it considers abusive uses of FRT, typically where individuals’ consent had not been obtained. For example, in the Everalbum case, a photo app developer reached a settlement with the FTC to avoid penalties of up to $43,280 for each instance in which it had used FRT without obtaining consumers’ express consent.
Similarly, the AEPD considers that failing to obtain users’ express consent can qualify as a very serious infringement (article 6 of the GDPR), subject to penalties ranging from €300,001 to €20,000,000 or 4% of annual turnover.
The future is uncertain, but the EU expects to adopt comprehensive regulation on AI (including FRT) in the medium term, whereas in the US this matter is not such a priority on the legislative agenda. Therefore, as with the GDPR, the EU will once again act as the global regulator, exercising the “Brussels Effect.”
Authors: Mateo García and Claudia Morgado