
The Global Data Privacy Office (GDPO) created these A.I. Data Privacy Principles to help Publicis Groupe agencies understand the risks and responsibilities connected to the processing of personal data in A.I. systems developed and/or used within the company. Taking these principles into account will help Publicis agencies comply with applicable data privacy laws.
The data privacy principles agencies must consider depend on their role in the A.I. lifecycle. If the agency develops an A.I. system, it needs to consider different rules and principles than when it only uses one.
The applicable rules and principles also depend on whether Publicis processes personal data in an A.I. system for its own purposes or does so on the explicit instructions of a client.
To differentiate between these circumstances, this document distinguishes four roles an agency may have:
If the agency acts as a Controlling Provider, it is responsible for complying with the data privacy principles when processing personal data during the development, testing, and monitoring phases of the A.I. system.
If the agency acts as a Controlling Deployer, it is responsible for complying with the data privacy principles when processing personal data during the use of the A.I. system. The Controlling Deployer must apply those principles during the vendor due diligence process and to both the personal data it puts into the A.I. system and the data the system generates.
If the agency develops or uses an A.I. system on behalf of and in accordance with the instructions of the client (acting as a Processing Provider or a Processing Deployer, respectively), the client, not the agency, is responsible for complying with the data privacy principles.
The A.I. Data Privacy Principles are: Legal Basis, Purpose Limitations, Accountability, Data Minimization, Accuracy, Fairness, Retention, Transparency/Data Subjects’ Rights, Confidentiality, and Security.
In recent years the use of Artificial Intelligence (“A.I.”) has grown exponentially. Within Publicis Groupe (“Publicis”) this is no different. Many A.I. systems our agencies use are processing personal data; for example, to train these systems or to generate new products or services.
The use of A.I. systems can have many benefits, but also has risks. To limit the risks from a data privacy perspective, it is crucial to adopt a series of A.I. Data Privacy Principles (“the Principles”).
These Principles help guarantee safe and compliant processing of personal data in the A.I. systems used and developed within Publicis.
The Principles aim to:
The Principles are applicable when A.I. systems process personal data. The Principles are meant to help Publicis agencies comply with data privacy rules. They do not cover compliance with any other area of law, such as intellectual property, trade secrets, or product safety.
A.I. system - means a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different A.I. systems vary in their levels of autonomy and adaptiveness after deployment.
Deployer - means the entity under whose authority an A.I. system is used.
Input Data - means data provided to or directly acquired by an A.I. system based on which the system produces an output.
Output Data - means new data an A.I. system creates or synthesizes based on input data and the A.I.’s algorithm.
Personal Data - means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.
Provider - means an entity that develops an A.I. system, or that has an A.I. system developed, and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
If there are questions about the Principles, or legal support is required to ensure compliance with them, please reach out to the Global Data Privacy Office.
The principles agencies must consider depend on their role in the A.I. lifecycle. If the agency develops an A.I. system, it needs to consider different rules than when it uses one. It also depends on whether Publicis processes personal data in an A.I. system for its own purposes or does so on the explicit instructions of a client.
We differentiate between four scenarios:
The agency that develops an A.I. system for its own purposes is considered a Controlling Provider of an A.I. system. Controlling Providers are responsible for the processing of personal data in their A.I. system.
Controlling Providers of A.I. systems can process personal data for different purposes, such as development, training, testing, security, Input Data offering, legal, and/or monitoring purposes (“Developer Activities”).
The agency acting as a Controlling Provider needs to consider the following Principles:
If consent is chosen as the legal basis, the agency acting as Provider must ensure the A.I. system can respond properly to the withdrawal of data subjects’ consent. This requires an (automated) review process for each withdrawal (data subject identification, extent of the withdrawal, etc.), and the A.I. system must (partially) erase that data subject’s Personal Data following a legitimate withdrawal.
If legitimate interest is chosen, a legitimate interest assessment (LIA) needs to be conducted before the personal data is processed in the A.I. system. In this LIA, the agency’s interest in conducting the Developer Activities needs to be weighed against the fundamental rights, freedoms, and interests of the individuals whose data is being processed.
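To make the consent-withdrawal mechanics concrete, here is a minimal, purely illustrative sketch (all names are assumptions, not actual Publicis tooling): a withdrawal is first validated against a consent registry, and only a legitimate withdrawal triggers erasure of that data subject’s records.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: names and structure are illustrative
# assumptions, not an actual Publicis component.

@dataclass
class ConsentRegistry:
    """Tracks which data subjects currently consent to processing."""
    consented: set = field(default_factory=set)

    def grant(self, subject_id):
        self.consented.add(subject_id)

    def withdraw(self, subject_id):
        # Review step: only a known, previously consenting subject
        # constitutes a legitimate withdrawal (identification check).
        if subject_id in self.consented:
            self.consented.remove(subject_id)
            return True
        return False

def handle_withdrawal(registry, records, subject_id):
    """Erase the subject's records once the withdrawal is validated."""
    if not registry.withdraw(subject_id):
        return records  # no legitimate withdrawal -> nothing changes
    return [r for r in records if r["subject_id"] != subject_id]

registry = ConsentRegistry()
registry.grant("ds-001")
registry.grant("ds-002")
records = [{"subject_id": "ds-001", "name": "Alice"},
           {"subject_id": "ds-002", "name": "Bob"}]
records = handle_withdrawal(registry, records, "ds-001")
print(len(records))  # 1: only ds-002's record remains
```

A real system would also need to propagate the erasure into training artifacts, which is precisely why the principle requires the capability to be designed in from the start.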
Prior to processing personal data in an A.I. system the agency acting as Controlling Provider needs to define, register, and communicate the specific purposes for each of the Developer Activities. The purposes need to be communicated to the data subjects in clear and plain language, ensuring the target audience understands why their data is being processed.
The agency also needs to align the functioning of the A.I. system with these purposes to avoid personal data being processed for any other purposes.
In case additional purposes for processing the personal data are identified, these need to be compatible and proportionate to the original purpose. It is essential to promptly inform data subjects about these new purposes and, if required by law, obtain consent from the data subject to use their personal data for the new purpose.
This approach ensures that we only process personal data for purposes that are transparent and well-defined, reflecting our commitment to responsible data processing.
Accountability means that the agency acting as Controlling Provider is responsible for ensuring that its A.I. system’s processing of personal data complies with data privacy law, and that the agency can demonstrate such compliance. To help demonstrate this compliance, the agency should:
The personal data processed during the Developer Activities must be appropriate, relevant, and limited to what is necessary for the defined purposes.
This does not mean limiting the data as much as possible; rather, it means the agency needs to determine the right amount of data for the defined purpose.
To properly minimize the personal data processing in the A.I. system the agency should:
The personal data processed by the A.I. system should be up to date and accurate.
In the design phase of the A.I. system, the agency will have to ensure that the system has a built-in mechanism for keeping the personal data processed for the Developer Activities up to date and accurate. Part of this mechanism should be allowing data subjects access to their personal data in the A.I. system and the possibility to correct that data on request.
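As an illustration only (the class and method names are assumptions, not an existing Publicis system), the shape of such an access-and-rectification mechanism over the data store behind an A.I. system could look like:

```python
# Illustrative sketch: a minimal store offering data subjects access
# to their personal data and correction of inaccurate fields.

class SubjectDataStore:
    def __init__(self):
        self._data = {}

    def upsert(self, subject_id, attributes):
        self._data[subject_id] = dict(attributes)

    def access(self, subject_id):
        """Data subject access: return a copy of what is held."""
        return dict(self._data.get(subject_id, {}))

    def rectify(self, subject_id, corrections):
        """Correct inaccurate fields on the data subject's request."""
        if subject_id in self._data:
            self._data[subject_id].update(corrections)

store = SubjectDataStore()
store.upsert("ds-007", {"email": "old@example.com", "country": "FR"})
store.rectify("ds-007", {"email": "new@example.com"})
print(store.access("ds-007")["email"])  # new@example.com
```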
A.I. systems often suffer from biases due to historical data, incomplete datasets, or poor governance models. Such biases can lead to direct or indirect discrimination. To mitigate this, biases should be identified and removed during the development, training, and after-market testing phases of the A.I. system.
The personal data processed in the A.I. system should not be processed longer than necessary for the defined purposes.
Therefore, it is essential to define the retention period for each specific purpose and clearly communicate the retention periods to the data subjects.
In the design phase of the A.I. system, the agency should ensure that the system has a built-in mechanism for ensuring the personal data processed for the Developer Activities is not stored longer than necessary for the relevant purpose. The mechanism should also provide the option, where possible, to have the data deleted on request of the data subject.
Also, the agency should periodically review the data it holds, and erase or anonymise it when it no longer needs it.
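A periodic review of this kind could, in a simplified and purely illustrative form (the purpose names and retention periods below are assumptions, not Publicis policy), be sketched as a sweep that anonymises records once their purpose-specific retention period has elapsed:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention sweep: purposes and periods are assumptions.
RETENTION = {
    "training": timedelta(days=365),
    "monitoring": timedelta(days=90),
}

def sweep(records, now=None):
    """Anonymise records whose retention period has expired."""
    now = now or datetime.now(timezone.utc)
    result = []
    for rec in records:
        limit = RETENTION.get(rec["purpose"])
        if limit is not None and now - rec["collected_at"] > limit:
            # Past the retention limit: strip the identifier
            # (anonymise) instead of keeping identifiable data.
            rec = {**rec, "subject_id": None}
        result.append(rec)
    return result

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
records = [
    {"subject_id": "ds-1", "purpose": "monitoring",
     "collected_at": now - timedelta(days=120)},  # past 90-day limit
    {"subject_id": "ds-2", "purpose": "training",
     "collected_at": now - timedelta(days=30)},   # still within limit
]
swept = sweep(records, now=now)
print([r["subject_id"] for r in swept])  # [None, 'ds-2']
```

Scheduling this sweep (for example, as a recurring job) operationalises both the defined retention periods and the periodic review the principle calls for.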
Transparent processing is about being clear, open, and honest with data subjects from the start about who the agency acting as Controlling Provider is, how and why its A.I. system processes their personal data, and what rights the data subjects have.
This information should be concise, transparent, understandable, and easily accessible.
In practice, this transparency is usually provided in a Privacy Notice that should be offered to the data subject either before or at the moment their personal data is collected or processed.
The agency must implement effective controls and mechanisms to ensure that everyone involved in the processing of personal data during the Developer Activities only has access on a need-to-know basis and respects the confidentiality of such data.
A.I. systems need to be robust, secure, and safe at every stage of their lifecycle. To achieve this, the agency acting as Controlling Provider should ensure the data security level of the A.I. system they develop is in line with Publicis’ Global Security Policies and industry standards.
The Global Security Office (GSO) should review the A.I. system from a data security perspective prior to its launch.
The agency that uses an A.I. system (either provided by Publicis or a vendor) for its own purposes is considered a Controlling Deployer of an A.I. system.
Controlling Deployers are responsible for the compliant processing of the personal data they determine to process in the A.I. systems they use.
The personal data processed during the use of an A.I. system can, for example, be part of Input Data as well as Output Data.
Ensuring compliance during the deployment or use phase of the A.I. system has two components:
For the first component the agency acting as Controlling Deployer needs to assess whether the (use of an) A.I. system:
For the second component the agency needs to assess whether the personal data it determines to process in the A.I. system:
The agency that develops an A.I. system that processes personal data, on behalf of and in accordance with the instructions of the client, is considered a Processing Provider of an A.I. system.
The agency that uses an A.I. system on behalf of and in accordance with the instructions of the client, where that use involves the processing of personal data, is considered a Processing Deployer of an A.I. system.
In both scenarios mentioned above, the client, not the agency, is the party primarily responsible for the personal data processed in the A.I. system.
If the agency is either a Processing Provider or a Processing Deployer, it is crucial to enter into an agreement with the client. This agreement should clearly indicate, among other things, that: