Posts tagged Facial Recognition Technology.
On September 3, 2024, the Dutch Data Protection Authority announced a €30.5 million fine against Clearview AI for unlawfully processing personal data in connection with its biometric database.

On July 30, 2024, Texas AG Ken Paxton announced that Meta agreed to pay $1.4 billion to settle a lawsuit over allegations that Meta processed facial geometry data of Texas residents in violation of Texas law, including the Texas Capture or Use of Biometric Identifier Act (“CUBI”).

On May 23, 2024, the European Data Protection Board adopted an Opinion on the use of facial recognition technologies by airport operators and airline companies to streamline the passenger flow at airports.

On October 17, 2023, the First-tier Tribunal of the UK General Regulatory Chamber allowed an appeal by Clearview AI Inc. (“Clearview”) against an enforcement notice and fine issued by the UK’s Information Commissioner’s Office (“ICO”).

On May 17, 2023, the European Data Protection Board (EDPB) adopted the final version of its Guidelines on facial recognition technologies in the area of law enforcement (the “Guidelines”). The Guidelines address lawmakers at the EU and EU Member State level, and law enforcement authorities and their officers implementing and using facial recognition technology. 

On December 31, 2022, Baltimore’s ordinance banning the private sector’s use of facial recognition technology expired. The ordinance, which was enacted in 2021, banned private entities and individuals within the city limits from using facial recognition technology, including obtaining, retaining, accessing or using a “face surveillance system” or any information obtained from such system. The Baltimore ordinance followed a similar ban on the use of facial recognition technology by private sector companies in Portland, Oregon, enacted in 2020. New York City also passed an ordinance in 2021 regulating commercial establishments’ use of biometric technology.

On October 17, 2022, the French Data Protection Authority (the “CNIL”) imposed a €20 million fine on Clearview AI for unlawful use of facial recognition technology. The fine was imposed after the CNIL’s prior formal notice remained unaddressed by Clearview AI.

On February 14, 2022, Texas Attorney General Ken Paxton brought suit against Meta, the parent company of Facebook and Instagram, over the company’s collection and use of biometric data. The suit alleges that Meta collected and used Texans’ facial geometry data in violation of the Texas Capture or Use of Biometric Identifier Act (“CUBI”) and the Texas Deceptive Trade Practices Act (“DTPA”). The lawsuit is significant because it represents the first time the Texas Attorney General’s Office has brought suit under CUBI.

On November 18, 2021, the European Data Protection Board (“EDPB”) released a statement on the Digital Services Package and Data Strategy (the “Statement”). The Digital Services Package and Data Strategy is a package composed of several legislative proposals, including the Digital Services Act (“DSA”), the Digital Markets Act (“DMA”), the Data Governance Act (“DGA”), the Regulation on a European approach for Artificial Intelligence (“AIR”) and the upcoming Data Act (expected to be presented shortly). The proposals aim to facilitate the further use and sharing of personal data between more public and private parties; support the use of specific technologies, such as Big Data and artificial intelligence (“AI”); and regulate online platforms and gatekeepers.

On November 17, 2021, the Senate Committee on Commerce, Science, and Transportation held its confirmation hearing on FTC Commissioner nominee Alvaro Bedoya.

On November 2, 2021, Facebook parent Meta Platforms Inc. announced in a blog post that it will shut down its “Face Recognition” system in coming weeks as part of a company-wide move to limit the use of facial recognition in its products. The company cited the need to “weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules.”

On August 9, 2021, Baltimore joined Portland, Oregon and New York City in enacting a local ordinance regulating the private sector’s use of facial recognition technology. Baltimore’s ordinance will become effective on September 8, 2021. Read our earlier post for more details about Baltimore’s ban on the use of facial recognition technology by private entities and individuals within its city limits.

On September 1, 2021, the South Korean Personal Information Protection Commission (“PIPC”) issued fines against Netflix and Facebook for violations of the Korean Personal Information Protection Act (“PIPA”).

On July 27, 2021, the Spanish Data Protection Authority (the “AEPD”) imposed a €2,520,000 fine on Spanish supermarket chain Mercadona, S.A. for unlawful use of a facial recognition system.

On June 14, 2021, the Baltimore City Council passed a bill that would ban the use of facial recognition technology by private entities and individuals within the city limits. If the bill is signed into law, Baltimore, Maryland would become the latest U.S. city to enact stringent regulations governing the use of facial recognition technology in the private sector.

On April 23, 2021, the National Information Security Standardization Technical Committee of China published a draft standard (in Chinese) on Security Requirements of Facial Recognition Data (the “Standard”). The Standard, which is non-mandatory, details requirements for collecting, processing, sharing and transferring data used for facial recognition.

On April 21, 2021, the European Commission (the “Commission”) published its Proposal for a Regulation on a European approach for Artificial Intelligence (the “Artificial Intelligence Act”). The Proposal follows a public consultation on the Commission’s white paper on AI published in February 2020. The Commission simultaneously proposed a new Machinery Regulation, designed to ensure the safe integration of AI systems into machinery.

On January 11, 2021, the FTC announced that Everalbum, Inc. (“Everalbum”), developer of the “Ever” photo storage app, agreed to a settlement over allegations that the company deceived consumers about its use of facial recognition technology and its retention of the uploaded photos and videos of users who deactivated their accounts.

On December 22, 2020, New York Governor Andrew Cuomo signed into law legislation that temporarily bans the use or purchase of facial recognition and other biometric identifying technology in public and private schools until at least July 1, 2022. The legislation also directs the New York Commissioner of Education (the “Commissioner”) to conduct a study on whether this technology is appropriate for use in schools.

On September 9, 2020, Portland, Oregon became the first jurisdiction in the country to ban the private-sector use of facial recognition technology in public places within the city, including stores, restaurants and hotels. The city Ordinance was unanimously passed by the Portland City Council and will take effect on January 1, 2021. The City Council cited as rationale for the Ordinance documented instances of gender and racial bias in facial recognition technology, and the fact that marginalized communities have been subject to “over surveillance and [the] disparate and detrimental impact of the use of surveillance.”

On August 11, 2020, the Court of Appeal of England and Wales overturned the High Court’s dismissal of a challenge to South Wales Police’s use of Automated Facial Recognition technology (“AFR”), finding that its use was unlawful and violated human rights.

On August 4, 2020, Senators Jeff Merkley (OR) and Bernie Sanders (VT) introduced the National Biometric Information Privacy Act of 2020 (the “bill”). The bill would require companies to obtain individuals’ consent before collecting biometric data. Specifically, the bill would prohibit private companies from collecting biometric data—including eye scans, voiceprints, faceprints and fingerprints—without individuals’ written consent, and from profiting off of biometric data. The bill provides individuals and state attorneys general the ability to institute legal proceedings against entities for alleged violations of the act.

Texas Attorney General Ken Paxton is investigating Facebook Inc. (“Facebook”) for alleged violations of the Texas Business and Commercial Code, which contains provisions governing the collection, retention and disclosure of biometric data. As we previously reported, Facebook recently reached a $650 million settlement for alleged violations of Illinois’ Biometric Information Privacy Act for its use of facial recognition software without permission from affected users.

On June 18, 2020, Senator Sherrod Brown (OH) released a discussion draft of a privacy bill entitled the Data Accountability and Transparency Act of 2020 (“the Bill”). The Bill would provide individuals with several new rights regarding their personal data; implement rules limiting how personal data is collected, used or shared; and establish a new federal agency called the Data Accountability and Transparency Agency to protect individuals’ privacy and enforce those rules.

On June 9, 2020, the French Data Protection Authority (the “CNIL”) published its Annual Activity Report for 2019 (the “Report”).

On March 12, 2020, the Washington State Legislature passed SB 6280, which establishes safeguards for the use of facial recognition technology by state and local government agencies. Its stated goal is to allow the use of facial recognition services in ways that benefit society while prohibiting uses that put freedoms and civil liberties at risk.

On March 10, 2020, the Vermont Attorney General filed a lawsuit against Clearview AI (“Clearview”), alleging that Clearview violated Vermont’s consumer protection law and data broker law. We previously reported on Vermont’s data broker law, which was the first data broker legislation in the U.S.

Hunton’s Centre for Information Policy Leadership (“CIPL”) reports on the top privacy-related priorities for this year:

1.  Global Convergence and Interoperability between Privacy Regimes

Around the world, new privacy laws are coming into force and outdated laws continue to be updated: the EU General Data Protection Regulation (“GDPR”), Brazil’s Lei Geral de Proteção de Dados Pessoais (“LGPD”), Thailand’s Personal Data Protection Act, India’s and Indonesia’s proposed bills, California’s Consumer Privacy Act (“CCPA”), and the various efforts in the rest of the United States at the federal and state levels. This proliferation of privacy laws is bound to continue.

On January 13, 2020, lawmakers in Washington state introduced a new version of the Washington Privacy Act, a comprehensive data privacy bill, in both the state Senate and House of Representatives. It would apply to companies that conduct business in Washington or provide products or services to Washington residents.

On September 6, 2019, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the European Data Protection Board (the “EDPB”) on its draft guidelines on processing of personal data through video devices (the “Guidelines”). The Guidelines were adopted on July 10, 2019, for public consultation.

On September 4, 2019, the High Court of England and Wales dismissed a challenge to South Wales Police’s use of Automated Facial Recognition technology (“AFR”). The Court determined that the police’s use of AFR had been necessary and proportionate to achieve their statutory obligations.

On August 21, 2019, the Belgian Data Protection Authority (the “Belgian DPA”) published a press release announcing its intention to further investigate a data breach notified by Adecco Belgium, a temporary employment agency. The breach, suffered by the company Suprema, exposed thousands of biometric records, including fingerprints and images enabling facial recognition. The compromised data included approximately 2,000 fingerprints of Adecco Belgium’s employees.

On August 21, 2019, the Swedish Data Protection Authority (the “Swedish DPA”) imposed its first fine since the EU General Data Protection Regulation (“GDPR”) came into effect in May 2018. The Swedish DPA fined a school 200,000 Swedish kronor for creating a facial recognition program in violation of the GDPR.

On August 15, 2019, the UK Information Commissioner’s Office (“ICO”) announced that it had launched an investigation into the use of live facial recognition technology at the King’s Cross development in London. This follows a letter sent by the mayor of London, Sadiq Khan, to the owner of the development inquiring as to whether the use of the software was legal. The company responsible for the technology said it was used for the purposes of public safety.

On August 8, 2019, the United States Court of Appeals for the Ninth Circuit allowed a class action brought by Illinois residents to proceed against Facebook under the Illinois Biometric Information Privacy Act (“BIPA”) (740 ILCS 14/1, et seq.).

On July 29, 2019, the UK Information Commissioner’s Office (“ICO”) announced the 10 projects that it has selected, out of 64 applicants, to participate in its sandbox. The sandbox, for which applications opened in April 2019, is designed to support organizations in developing innovative products and services with a clear public benefit. The ICO aims to assist the 10 organizations in ensuring that the risks associated with the projects’ use of personal data are mitigated. The selected participants cover a number of sectors, including travel, health, crime, housing and artificial intelligence.

On June 14, 2019, the United States Court of Appeals for the Ninth Circuit affirmed summary judgment in favor of Facebook, holding that the company did not violate the Illinois Biometric Information Privacy Act (“BIPA”) (740 ILCS 14/15, 14/20).

The Illinois Supreme Court ruled today that an allegation of “actual injury or adverse effect” is not required to establish standing to sue under the Illinois Biometric Information Privacy Act, 740 ILCS 14 (“BIPA”). This post discusses the importance of the ruling to current and future BIPA litigation.

As we previously reported in February 2017, an Illinois federal judge denied a motion to dismiss two complaints brought under the Illinois Biometric Information Privacy Act, 740 ILCS 14 (“BIPA”) by individuals who alleged that Google captured, without the plaintiffs’ consent, biometric data from facial scans of images that were uploaded onto Google Photos. The cases subsequently were consolidated, and on December 29, 2018, the Northern District of Illinois dismissed the case on standing grounds, finding that despite the existence of statutory standing under BIPA, neither plaintiff had claimed any injury that would support Article III standing.

Recent judicial interpretations of the Illinois Biometric Information Privacy Act (“BIPA”), 740 ILCS 14, present potential litigation risks for retailers who employ biometric-capture technology, such as facial recognition, retina scan or fingerprint software. Federal judges in various district courts have allowed BIPA cases to move forward against companies such as Facebook, Google and Shutterfly, and retailers who use biometric data for security, loss prevention or marketing purposes may also become litigation targets as federal judges decline to narrow the statute’s applicability and additional states consider passing copycat statutes.

The Article 29 Working Party (“Working Party”) recently issued its Opinion on data processing at work (the “Opinion”). The Opinion, which complements the Working Party’s previous Opinion 08/2001 on the processing of personal data in the employment context and Working document on the surveillance of electronic communications in the workplace, seeks to provide guidance on balancing employee privacy expectations in the workplace with employers’ legitimate interests in processing employee data. The Opinion is applicable to all types of employees and not just those under an employment contract (e.g., freelancers).

On May 16, 2017, the Governor of the State of Washington, Jay Inslee, signed into law House Bill 1493 (“H.B. 1493”), which sets forth requirements for businesses that collect and use biometric identifiers for commercial purposes. The law will become effective on July 23, 2017. With the enactment of H.B. 1493, Washington becomes the third state to pass legislation regulating the commercial use of biometric identifiers. Previously, both Illinois and Texas enacted the Illinois Biometric Information Privacy Act (740 ILCS 14) (“BIPA”) and the Texas Statute on the Capture or Use of Biometric Identifier (Tex. Bus. & Com. Code Ann. §503.001), respectively.

On June 15, 2016, the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) announced that its multistakeholder process to develop a code of conduct regarding the commercial use of facial recognition technology had concluded with the group reaching a consensus on a best practices document. As we previously reported, the NTIA announced the multistakeholder process in December 2013 in response to the White House’s February 2012 privacy framework, which directed the NTIA to oversee the development of codes of conduct that specify how the Consumer Privacy Bill of Rights applies in specific business contexts.

On June 16, 2015, the Consumer Federation of America announced in a joint statement with other privacy advocacy groups that they would no longer participate in the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) multistakeholder process to develop a code of conduct regarding the commercial use of facial recognition technology. The letter was signed by the Center for Democracy & Technology, the Center for Digital Democracy, the Consumer Federation of America, Common Sense Media, the Electronic Frontier Foundation, the American Civil Liberties Union, Consumer Action, Consumer Watchdog and the Center on Privacy & Technology at Georgetown University Law Center. This decision comes after 16 months of meetings and negotiations. In its announcement, the group highlighted its inability to come to an agreement with industry groups on how the issue of consumer consent would be addressed in a code of conduct regarding the use of facial recognition technology. Specifically, the disagreement between consumer and industry groups revolved around the default rule for consumer consent (i.e., whether the default should be opt-in or opt-out consent).

On March 28, 2014, the 87th Conference of the German Data Protection Commissioners concluded in Hamburg. This biannual conference provides a private forum for the 17 German state data protection authorities (“DPAs”) and the Federal Commissioner for Data Protection and Freedom of Information, Andrea Voßhoff, to share their views on current issues, discuss relevant cases and adopt Resolutions aimed at harmonizing how data protection law is applied across Germany.

On December 3, 2013, the U.S. Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) announced a new multistakeholder process to develop a code of conduct regarding the commercial use of facial recognition technology. The first meeting is set for February 6, 2014 in Washington, D.C., and will provide stakeholders with background on the privacy issues associated with facial recognition technology, including how facial recognition technology currently is being used by businesses and how it may be used in the near future. The February meeting is open to all interested stakeholders and will be available for viewing via webcast. Additional meetings are planned for the spring and summer of 2014.

On October 22, 2012, the Federal Trade Commission released a report entitled “Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies.” The report focuses on privacy concerns associated with facial recognition technology, which is becoming increasingly ubiquitous across a variety of commercial applications ranging from search engines to video games to password authentication.

On March 22, 2012, the Article 29 Working Party (the “Working Party”) adopted an Opinion analyzing the privacy and data protection law framework applicable to the use of facial recognition technology in online and mobile services, such as social networks and smartphones. The Working Party defines facial recognition as the “automatic processing of digital images which contain the faces of individuals for the purpose of identification, authentication/verification or categorization of those individuals.”

On December 23, 2011, the Federal Trade Commission announced that it is seeking public comments on the privacy and security implications raised by the use of facial recognition technology. The FTC recently held a public workshop entitled “Face Facts: A Forum on Facial Recognition Technology,” which discussed the current and future commercial applications of facial recognition technologies and the associated privacy and security concerns.

Recent developments involving the use of facial recognition technology have raised privacy concerns in the United States, Europe and Canada. As we reported earlier this month, the Electronic Privacy Information Center (“EPIC”) and several other consumer privacy advocacy groups filed a complaint with the Federal Trade Commission against Facebook for its use of facial recognition technology. According to EPIC’s complaint, Facebook’s Tag Suggestions feature recognizes individuals’ faces based on photographs already on Facebook, then suggests that users “confirm Facebook’s identification of facial images in user photos” when they upload new photos to their Facebook profiles.

On June 10, 2011, the Electronic Privacy Information Center (“EPIC”) filed a complaint with the Federal Trade Commission, claiming that Facebook’s facial recognition and automated online image identification features harm consumers and constitute “unfair and deceptive acts and practices.” According to a post on The Facebook Blog, the Tag Suggestions feature matches uploaded “new photos to other photos [the user is] tagged in.” Facebook then “[groups] similar photos together and, whenever possible, suggest[s] the name of the friend in the photos.” On June 13, 2011, Congressman Edward Markey (D-MA) released a statement supporting the complaint and indicating that he will “continue to closely monitor this issue.”
