Bill C-27: a first framework for artificial intelligence in Canada

Vincent Bergeron, Vanessa Deschênes & Tara D’aigle-Curley
Lawyers, patent and trademark agents

On June 16, 2022, the Federal Government introduced Bill C-27[1], which enacts the Consumer Privacy Protection Act[2] (“CPPA”), the Personal Information and Data Protection Tribunal Act[3] and the Artificial Intelligence and Data Act (“AIDA”)[4]. This text discusses only AIDA, as we covered the first two acts in a previous publication.

Section 1 – Aim of the Act, Applicability and Definitions


The use of artificial intelligence (“AI”) has expanded rapidly in recent years to all business sectors, prompting governments around the world to seek to regulate its development, use and commercialization, especially with respect to the privacy of consumers and the absence of bias against individuals subject to automated decisions.

Although Quebec’s Bill 25[5] imposes certain safeguards that may apply to AI, such as privacy impact assessments and the explainability of automated decision-making, no specific AI legislation had previously been considered in Canada, at either the federal or provincial level. AIDA is therefore a first attempt in this regard.

Through AIDA, Parliament invokes the need to regulate the design, development and use of AI systems, consistent with domestic and international standards, to protect individuals from potential harm[6], drawing on its trade and commerce jurisdiction[7]. Unlike the CPPA, AIDA does not create new rights for individuals, but rather aims to prevent disadvantages or possible harm.


AIDA applies to all “regulated activities” of private sector companies, namely (i) processing, or making available for use, any data for the design, development or use of an AI system, or (ii) designing, developing or making available for use an AI system, or managing its operation; in each case to the extent that the activity is carried out in the course of international or interprovincial trade and commerce[8].

More simply, the law could, for example, apply in the following cases:

  • the sale of an AI system created in Ontario to a company in Quebec;
  • the use by a Quebec company of an AI system created in a foreign country (containing data on Quebecers or Canadians);
  • the design or development of an AI system in Quebec for a client located abroad or in another province.

The current drafting of AIDA suggests that it could potentially have extraterritorial reach when international AI systems are used, developed, designed or managed in Canada.


Artificial intelligence system

AIDA defines an “artificial intelligence system” as “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions”[9]. This is a fairly broad definition that could encompass some activities one would not spontaneously call an AI system (e.g., some optimization systems process data to generate content without naturally being called AI systems). It would not be surprising to see this definition evolve in parliamentary committee.

High-impact system

AIDA imposes stricter obligations on AI systems that qualify as “high-impact systems.” Regulations to come will define exactly what constitutes a high-impact system[10].

Section 2 – Key Requirements

  1. Implement policies, practices and procedures:

Any person responsible for an AI system who engages in a regulated activity must develop policies and procedures when processing or making available anonymized data as part of that activity. The Act contemplates that regulations will clarify what measures must be in place to anonymize, use or manage anonymized data[11].

As such, the responsible party will need to put in place measures to identify, assess and mitigate the risks of harm or biased outcomes that may result from the use of the AI system[12].

The qualification of harm in AIDA draws on the prohibited grounds of discrimination enumerated in section 3 of the Canadian Human Rights Act[13], which normally applies only to federal institutions and whose enumeration is broader than the prohibited grounds in the Canadian and Quebec Charters. All AI systems subject to AIDA must therefore be free of bias, or of risk of harm to an individual, on the following grounds[14]:

  • Race, national or ethnic origin, colour, religion, age, sex, sexual orientation, gender identity or expression, marital status, family status, genetic characteristics, a conviction for which a pardon has been granted or a record suspension ordered, disability, and pregnancy or childbirth.

AIDA also requires organizations engaged in a regulated activity to have policies, procedures and frameworks in place related to the assessment and mitigation of risks caused by the system (see item 2 below) and to the organization’s monitoring and control program (see item 3 below).

  2. Conduct risk assessment and mitigation of the system:

Since the Act is intended to prevent physical or psychological harm, damage to property and economic loss, the person responsible for an AI system must conduct an assessment to determine whether the system qualifies as a high-impact system[15].

If the system so qualifies, the person responsible must:

  • Implement measures to identify, assess and mitigate the risk of harm or biased outcomes;
  • Monitor compliance on an ongoing basis and maintain evidence of the effectiveness of the monitoring and controls performed;
  • Notify the Minister if harm materializes or is likely to materialize;
  • Publish a description of the AI system on its website, including:
    • Its use;
    • The content generated, or the prediction or recommendation made, by the system;
    • The mitigation measures implemented by the responsible party;
    • Any other information prescribed by regulation.
  3. Monitor system compliance on an ongoing basis:

The person responsible for a high-impact system has an obligation to establish measures to monitor compliance with the mitigation measures implemented and to assess their effectiveness[16].

In effect, this means setting up a program to monitor and control compliance with the Act. Concretely, the person responsible for an AI system will have to verify periodically, by means of objective tests and documented evidence, that the AI systems under their responsibility do not generate biased results or harm to the persons concerned, and that the organization complies with all formalities related to the Act. Generally, these controls also allow an organization to verify that employees are adequately trained and that policies and procedures are written, kept up to date and followed.
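As a purely illustrative sketch of what such an "objective test" for biased results might look like: AIDA does not prescribe any particular methodology, so the metric below (a disparate-impact ratio across groups, with a hypothetical 0.8 internal threshold borrowed from US employment practice rather than any Canadian standard) and all names in it are assumptions for illustration only.

```python
# Hypothetical periodic bias check on an AI system's decision log.
# Neither the metric nor the threshold comes from AIDA; both are
# assumptions used only to illustrate a documented, repeatable test.

def favorable_rate(decisions, group_key, group_value):
    """Share of favorable outcomes received by one group."""
    outcomes = [d["favorable"] for d in decisions if d[group_key] == group_value]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact(decisions, group_key, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    ref_rate = favorable_rate(decisions, group_key, reference)
    return favorable_rate(decisions, group_key, protected) / ref_rate if ref_rate else 0.0

# Toy decision log; a real program would sample production decisions.
decisions = [
    {"age_band": "55+", "favorable": True},
    {"age_band": "55+", "favorable": False},
    {"age_band": "18-54", "favorable": True},
    {"age_band": "18-54", "favorable": True},
]

ratio = disparate_impact(decisions, "age_band", "55+", "18-54")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 on this toy data
if ratio < 0.8:  # hypothetical internal review threshold
    print("flag for review: potential biased output")
```

Running such a check on a schedule, and retaining the results, is one way an organization could produce the "documented evidence" of monitoring effectiveness described above.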

Any monitoring and control program should ideally lead to findings and/or recommendations, which should be communicated to those accountable for compliance in the organization.

  4. Be transparent:

In an effort to protect affected individuals from the risk of harm arising from a decision based on the biased output of a high-impact system, AIDA requires subject organizations to publish, on a publicly accessible website, a plain-language description of any high-impact AI system made available by, or under the management of, the organization[17].

In addition, the person responsible for the high-impact system must notify the Minister, as soon as possible and in accordance with the regulations, if the use of the AI system results, or is likely to result, in significant harm[18].

The concept of significant harm is not defined in the Act; its meaning will have to await the publication of regulations, or its interpretation by the Minister responsible (or the Artificial Intelligence and Data Commissioner, see Section 3 below) or the courts.

Section 3 – Roles and Responsibilities, Powers, and Sanctions

  1. Creation of the role of Commissioner for Artificial Intelligence and Data

By default, responsibility for enforcement of the law and the imposition of penalties for non-compliance is vested in a Minister designated by order-in-council[19] .

However, the Minister will have the authority to designate a senior official of the department over which the Minister presides as the Artificial Intelligence and Data Commissioner, to whom he or she may delegate his or her duties in whole or in part, except the power to make regulations[20].

Overall, this Commissioner should be given similar responsibilities and powers to the Privacy Commissioner with respect to the enforcement and monitoring of AIDA.

It is interesting to note that the designated person responsible for the enforcement of AIDA will be able to pass on information regarding non-compliance to other enforcement authorities, depending on their responsibilities[21] :

  • The Privacy Commissioner;
  • The Canadian Human Rights Commission;
  • The Commissioner of Competition;
  • The Canadian Radio-television and Telecommunications Commission;
  • A provincial privacy commissioner or a provincial human rights commission (or their equivalents);
  • Any other person or entity designated by regulation.
  2. Powers

The Minister or the Artificial Intelligence and Data Commissioner may, by order, require[22] :

  • the production of any records;
  • the publication of information about an AI system on a website;
  • the conduct of an audit, or the retention of an independent auditor to conduct one;
  • the implementation of any measure specified in an audit report;
  • the cessation of the use, or of the making available, of a high-impact system where there are reasonable grounds to believe that its use poses a serious risk of imminent harm.

Any such order becomes enforceable once a certified copy is filed with the Federal Court, at which point it has the same force as an order of that court. In practical terms, this gives the Minister or Commissioner quasi-judicial powers in making such an order.

  3. Sanctions

Violators of the Act may be subject to administrative monetary penalties prescribed by regulation or criminal penalties, depending on the seriousness of the violation[23].

The most serious offences could result in a fine of up to $25 million or 5% of the organization’s gross global revenue, whichever is greater, or imprisonment for up to five years less a day.

Section 4 – Extraterritorial Scope

The drafting of AIDA suggests that the intent of the legislation is to give it extraterritorial reach, should international AI systems be used, developed, designed or managed in Canada. This is consistent with the European Union’s proposed AI Regulation[24]. Companies operating in multiple jurisdictions should therefore consider establishing an international compliance program that addresses both Canadian and European AI requirements.

If you have any questions, please do not hesitate to contact our Emerging Technologies Group.

[1] An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts. Short title: Digital Charter Implementation Act, 2022.

[2] An Act to facilitate and promote electronic commerce through the protection of personal information collected, used or disclosed in the course of commercial activities.

[3] An Act to establish the Personal Information and Data Protection Tribunal.

[4] An Act respecting artificial intelligence systems and the data used in such systems.

[5] An Act to modernize legislative provisions respecting the protection of personal information, SQ 2021, c 25.

[6] Supra, note 1, Preamble.

[7] Constitution Act, 1867, s 91(2), “regulation of trade and commerce”.

[8] AIDA, s 5(1).

[9] AIDA, s 2.

[10] AIDA, s 5(1).

[11] AIDA, s 6.

[12] AIDA, s 8.

[13] Canadian Human Rights Act, RSC 1985, c H-6.

[14] Supra, note 10.

[15] AIDA, s 7.

[16] AIDA, s 10.

[17] AIDA, s 11.

[18] AIDA, s 12.

[19] AIDA, s 31.

[20] AIDA, s 33.

[21] AIDA, s 26.

[22] AIDA, ss 13-20.

[23] AIDA, ss 29-30.