CNIL Publishes a New Set of Guidelines on the Development of AI Systems

On July 2, 2024, the French Data Protection Authority (the “CNIL”) published a new set of guidelines addressing the development of artificial intelligence (“AI”) systems from a data protection perspective (the “July AI Guidelines”). These follow the publication of an earlier set of guidelines addressing the same topic in June 2024. The July AI Guidelines will be subject to public consultation until September 1, 2024.

Similar to the first set of guidelines published by the CNIL, the July AI Guidelines are divided into seven “AI how-to sheets” in which the CNIL seeks to guide organizations through the steps necessary to develop AI systems in a manner compatible with the EU General Data Protection Regulation (“GDPR”). The “AI how-to sheets” provide guidance on: (1) the legal basis of legitimate interest for the development of AI systems; (2) legitimate interest: focus on open-sourcing models; (3) legitimate interest: focus on web scraping; (4) informing data subjects; (5) respecting and facilitating the exercise of data subjects’ rights; (6) annotating data; and (7) ensuring the secure development of an AI system.

Notably, the July AI Guidelines offer guidance to controllers on how to draft a legitimate interest assessment in the context of AI system development, including the risks and mitigation measures relevant to the collection and compilation of datasets, the training of AI systems and the use of AI systems. The CNIL also addresses the exercise of data subject rights at various stages of development (e.g., training), as well as best practices and the cases in which controllers may rely on Article 11 of the GDPR so as not to have to re-identify data subjects. Regarding web scraping, the CNIL seeks to guide organizations through the measures and safeguards necessary to ensure that web scraping can lawfully be carried out on the basis of legitimate interests, and proposes a voluntary registry of organizations processing data collected through web scraping for AI development.

In addition to the July AI Guidelines, the CNIL published a questionnaire seeking input from providers and users of AI systems, as well as other relevant organizations, to shed light on the conditions under which AI models can be considered anonymous. The CNIL will leverage the responses to this questionnaire to adapt its future recommendations to the relevant risks and the ways in which such risks can be reduced.

Read the Guidelines and respond to the Questionnaire.
