Ariane Ohl-Berthiaume, Vincent Bergeron and Jules Gaudin

The year 2023 was significantly marked by advancements in the development and use of generative artificial intelligence (“AI”) systems, and 2024 is likely to be just as eventful. The Canadian Centre for Cyber Security defines generative AI as “[…] a type of artificial intelligence that generates new content by modelling features of data from large datasets that were fed into the model”.[1] The key difference from traditional AI models is this ability to generate new content.

An increasing number of companies are incorporating generative AI into their products and services, to improve them and optimize efficiency. While these advancements are extremely beneficial for a wide range of organizations, the use of this technology also creates new risks, both for these companies and for their customers. It is therefore important to understand the legal implications associated with the use of this profoundly disruptive technology, to avoid problems, particularly in terms of intellectual property, copyright and privacy.

Among these risks, potential breaches of privacy deserve particular attention. With these concerns in mind, privacy authorities around the world are urging great caution before using these systems to process personal information. In fact, many laws apply to the use of generative AI, which obviously includes those aiming to protect personal information.

With this in mind, and to support organizations in their development and use of this technology, the Privacy Commissioners of Canada (the “Commissioners”) have developed Principles for Responsible, Trustworthy and Privacy-Protective Generative AI Technologies[2].

The Principles

The Commissioners have established nine principles to ensure that developers and providers of generative AI are aware of the various privacy risks involved. These principles are important in guiding the development and use of generative AI in compliance with basic Canadian privacy requirements, thus avoiding unpleasant surprises in the months or years to come.

Organizations that implement them will quickly distinguish themselves from their competitors in the eyes of customers who are increasingly aware of the importance of the responsible use of generative AI, and of the legal, financial and reputational risks that can arise from its reckless use.

  1. Legal Authority and Consent

Before using personal information in the development or use of generative AI systems, we need to determine what legal basis allows us to collect and use that information. In most situations, this will be the consent of the individuals concerned. We must then ensure that consent has been properly obtained, even if the personal information is collected by third parties. In this respect, it is recommended that internal processes for obtaining consent be reviewed to ensure their validity.

If personal information is collected by a third party, it is important to ensure that they obtain valid consents, notably by including provisions to this effect in agreements.

  2. Appropriate Purposes

The processing of personal information as part of the development and use of generative AI should only be carried out for appropriate purposes. For example, commissioning a system that enables discriminatory profiling, or using this system to re-identify anonymized personal information, are not appropriate uses, since they are contrary to applicable laws.

It is also possible for uses to become inappropriate while the system is in use. In such cases, we recommend that you discontinue use, or take steps to ensure that inappropriate use ceases.

  3. Necessity and Proportionality

This principle reiterates the importance of assessing necessity and proportionality before using personal information in generative AI systems. It is therefore recommended to assess whether privacy-enhancing technologies can be put in place, and to consider whether other information, such as anonymized information, can be used to limit potential breaches of privacy.

This analysis of necessity and proportionality is essential to ensure that if personal information is to be processed, it will benefit from adequate protection throughout its life cycle, from collection to destruction.

  4. Openness

The principle of openness is fundamental to the processing of personal information. In particular, it enables the person concerned to understand what personal information will be used by the organization, how it is collected, for what purposes, and the risks to his or her privacy.

When using generative AI, it is important to be able to inform individuals simply and clearly about how their personal information is processed, at all stages of the technology’s use, from collection to the end of its life cycle. The information generated by the system must also be clearly identified to enable the individuals concerned to distinguish it.

In the process of developing technology that uses generative AI, it is advisable to clearly identify the objectives pursued by the system and the data used in its training. In addition, the risks of privacy breaches must be identified, and protection and security measures put in place to mitigate such risks, in particular by drawing on known practices in the field. Finally, the documentation resulting from the implementation, use and evolution of the system must be kept up to date for as long as the system in question handles personal information.

  5. Accountability

As mentioned above, the use of personal information in generative AI systems implies compliance with applicable privacy laws. Organizations wishing to develop or use these technologies are therefore responsible for ensuring that they comply with these legislative requirements and that they can demonstrate their compliance, particularly to privacy authorities who may request it.

Demonstrating this compliance involves, among other things, implementing privacy policies and practices within the organization, carrying out privacy impact assessments (PIAs) where necessary, and putting in place a process to receive complaints.

As far as the development process is concerned, the principle of accountability implies being able to explain how the generative AI system works, and to identify its potential vulnerabilities. A good practice for this is to carry out periodic external audits identifying these vulnerabilities, including potential biases, and recommending various measures to mitigate potential risks.

  6. Access to Personal Information

Individuals must be able to access their personal information. For this reason, organizations must put in place a process to respond to access requests in an efficient manner.

This process involves providing individuals with the opportunity to correct their personal information. This is especially important when a generative AI system relies on personal information to make a decision, since incorrect personal information will most likely lead to an inaccurate decision.

  7. Limiting Collection, Use and Disclosure of Information

The key to respecting this principle is to determine, at the earliest stages of a project involving generative AI, the purposes for which it is necessary to collect, use and disclose personal information. This ensures that the processing of personal information is genuinely necessary and justified. In this analysis, it should not be forgotten that the decisions or inferences made by the generative AI system may also contain personal information.

In addition, it is recommended that retention schedules be put in place to ensure that personal information is not retained beyond the period initially stipulated or used for other, secondary purposes. To this end, retention schedules should specify when personal information no longer needs to be retained, while ensuring that individuals can correct it promptly, particularly if a decision has been made about them.

  8. Accuracy

The principle of accuracy applies to personal information used in generative AI systems. This principle is essential to ensure that the decisions or recommendations made by the system are error-free and correspond to its intended purpose. Indeed, the use of inaccurate data can cause detrimental effects, such as the propagation of bias. For this reason, organizations must ensure that personal information used in generative AI systems is kept up to date, for example, by implementing a mandatory update process.

Ongoing evaluation of the results produced by the generative AI system should be carried out to check that the intended purposes are being met, and to take corrective actions if this is not the case.

  9. Safeguards

The final principle involves implementing safeguards to protect personal information throughout its life cycle. Safeguards should be proportionate to the sensitivity of the personal information handled.

Technical security measures should also be put in place to protect against the most common attacks on generative AI systems, such as injection and model inversion attacks.

Finally, when a problem is detected, the organization must have the appropriate processes in place to limit the risk of harm that may result, correct it effectively and take the necessary measures to ensure that it does not recur.


The use of generative AI undoubtedly offers many strategic business advantages for organizations in all sectors. It is becoming increasingly difficult for organizations to avoid using such tools if they are to remain competitive in their respective industries.

This growing interest, coupled with increasingly diversified and relevant offerings, has prompted the regulatory authorities in various countries to take an interest in the sector and assess the need for regulation. In this context, it is vital to ensure that errors are avoided right from the start.

When it comes to personal information and privacy, the principles described above are even more relevant as they reiterate mechanisms and concepts that are applicable even when an AI system is not in use.

To find out more, or if you’d like assistance with your strategy for the responsible use of generative AI, don’t hesitate to call on our multidisciplinary team in the Emerging Technologies Group at ROBIC.

[1] Government of Canada, Generative Artificial Intelligence – ITSAP.00.041, online: < >.

[2] Office of the Privacy Commissioner of Canada, Principles for responsible, trustworthy and privacy-protective generative AI technologies, online: < >.