Explainable Artificial Intelligence – A Transatlantic Perspective
Last month, IBM announced that it would stop all research relating to facial-recognition software, citing the dangers of mass surveillance and possible human rights violations. This decision undoubtedly echoes users' sometimes fragile trust and the ever-growing movement within the industry calling for a better framework for the ethical issues raised by the use of artificial intelligence (“AI”).
The massive integration of AI into all sectors and aspects of our society shows that trust in AI, and more specifically in its underlying decision-making processes, is necessary to enable its broader use. It is with this in mind that the concept of explainability, or explainable AI, has developed. Its purpose is to establish methods, concepts and principles that explain, in humanly comprehensible terms, the mechanisms underlying AI decision-making (factors, their relative weight, data used, etc.).
With new privacy regulations, such as the GDPR, coming into force, and considering the complexity of AI processes, the right to an explanation has generated significant discussion and debate among experts. As a relatively new concept, it remains difficult to implement at the legislative level. How, then, are Canada and Europe facing the ethical dilemma surrounding AI, and more specifically, what is their stance on the right to an explanation?
1. Explainable AI in Canada
The Canadian legislative regime does not contain any law pertaining explicitly to the concept of explainable AI. Neither domestic legislation nor the obligations arising from international agreements, such as the Canada-United States-Mexico Agreement (“CUSMA”) or the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (“CPTPP”), appears to impose on Canada any form of right to an explanation in AI matters.
However, some government initiatives have begun implementing this concept. To this end, the Canadian government introduced the Directive on Automated Decision-Making (“DADM”) to establish a framework regulating the use of automated decision systems by federal institutions and their service providers. This directive requires the government and its service providers to provide those affected by such decisions with a meaningful explanation of the decision and the reasons for it.
In fact, the DADM gives effect to the Guiding Principles in AI (“Principles”), created in 2018 following several consultations with experts, institutions and organizations, and more specifically to Principle No. 3, which ensures that the government will “provide meaningful explanations about AI decision-making, while also offering opportunities to review results and challenge these decisions”. In other words, in the absence of more explicit legislation, it appears that the federal government is required to respect a certain form of right to an explanation when using AI-based decision-making systems.
But there is more. In May 2019, as part of Canada’s Digital Charter, the federal government proposed several avenues to modernize the Personal Information Protection and Electronic Documents Act (“PIPEDA”) in its report Strengthening Privacy for the Digital Age: Proposals to modernize the Personal Information Protection and Electronic Documents Act. In that report, the government concludes that it would be relevant, among other things, to redefine the concept of transparency in PIPEDA so that any person would be informed not only of the use of automated processes and the factors influencing the decision-making process, but also of its effects and the logic it followed.
Following this proposal, the Office of the Privacy Commissioner of Canada (“OPC”) launched a consultation, the conclusions of which were delivered on March 13, 2020, entitled Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence. In this consultation, the OPC proposed creating a right to an explanation and improving transparency in interactions with automated processing. Because it deems the current notion of transparency inadequate given the ever-increasing complexity of AI systems, the OPC seeks to strengthen it by including a right to an explanation that would provide individuals not only with the reasoning behind such processing, but also with its consequences, in particular with respect to their rights.
As a result, while some suggest that it could be beneficial to apply concepts of civil liability to AI, the state of the law surrounding the right to an explanation in Canada remains ambiguous and uncertain.
2. Explainable AI in Europe
A contrario, the issue surrounding the right to an explanation is the subject of a more polarized debate in Europe. The debate stems from whether the concept of explainable AI within the General Data Protection Regulation (“GDPR”) entails providing information about the functionality of the system or about the process leading to the decision. The conflict centres on the level of explainability required by the GDPR.
The GDPR does not expressly provide for a right to an explanation, whether in Article 15 on the right of access or Article 22 on automated individual decision-making; only Recital 71 mentions such a concept. However, since recitals are not part of the GDPR’s operative text and therefore lack binding force, the debate on the required level of information appears to stem rather from Article 15(1)(h) GDPR, which allows an individual subject to automated decision-making, including profiling, to obtain “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”, within the limits set by Article 22(1) GDPR.
Apart from this particular debate, the European Commission took a clear stance on the issue surrounding the concept of explainable AI by publishing, on April 8, 2019, the Ethics Guidelines for Trustworthy AI (“Guidelines”). A non-binding framework aimed at ensuring ethical AI for all actors working in the field, the Guidelines set out, among their principles, the principle of explicability. Similarly to the Canadian position, this principle requires transparency in the decision-making process: the capacities and purposes of the systems must be communicated, and it must also be possible to explain decisions to the individuals affected, whether directly or indirectly.
Therefore, it appears that the state of the law in Europe with respect to the true meaning of the concept of explicability is just as uncertain as it is in Canada.
Even if the legislation and regulations surrounding the right to an explanation remain an open question in Canada and in Europe, it should be noted that several authors and industry members are calling on governments to clarify the situation, not only with respect to personal data protection, but also with regard to competition law, liability, contract law and even intellectual property rights. However, this long-awaited clarification does not seem as close as desired, and the lack of legal definition renders, according to experts, the status of explicability somewhat uncertain.
It is in the midst of this uncertainty that the technical debate on the difference between AI interpretability and AI explicability periodically resurfaces, in an attempt to determine whether one approach would be simpler, less costly or more relevant to implement than the other. While AI explicability attempts to propose a model for understanding and explaining a system’s internal mechanisms in human terms, interpretability instead proposes mechanisms for discerning how a system behaves without understanding why, the latter requiring less technical knowledge.
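To make the distinction concrete, the following is a minimal, purely illustrative sketch. The feature names, weights and threshold are hypothetical and drawn from no real system: a transparent linear scorer is interpretable because its internal weights can be read directly, while the per-decision breakdown of factors and their relative weight is the kind of “meaningful explanation” the texts discussed above contemplate.

```python
# Hypothetical toy credit-scoring model (illustrative only).
# Interpretability: the model's weights are directly readable.
# Explicability: each decision is accompanied by a human-readable
# account of which factors drove it, and by how much.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # assumed values
THRESHOLD = 1.0  # assumed approval threshold

def decide(applicant):
    """Transparent linear scorer: approve iff the weighted sum
    of the applicant's features meets the threshold."""
    score = sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return score >= THRESHOLD

def explain(applicant):
    """Per-decision explanation: each feature's signed contribution
    to the score, ranked by absolute impact -- i.e. the factors and
    their relative weight, stated in human terms."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 3.0, "debt": 1.0, "years_employed": 2.0}
print(decide(applicant))           # whether the decision is favourable
for feature, impact in explain(applicant):
    print(f"{feature}: {impact:+.2f}")
```

A real AI system is, of course, far less transparent than a hand-written linear rule; the point of the sketch is only that an “explanation” pairs the outcome with the factors that produced it, whereas mere interpretability stops at observing how outputs vary with inputs.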
All in all, this debate and legislators’ uncertainty regarding the implementation of these concepts in law illustrate one of the challenges that the AI industry must face and respond to quickly: how to gain and maintain the public’s trust in order to allow for an ever broader and more ethical implementation and integration of AI.
© CIPS, 2020.
 Jules Gaudin is a Lawyer for ROBIC, LLP, a firm of Lawyers, Patent and Trademark Agents.
 Élisabeth Lesage-Bigras is an Articling Student for ROBIC, LLP, a firm of Lawyers, Patent and Trademark Agents.
 Ina FRIED, « IBM is exiting the face recognition business », Axios.com, June 8, 2020.
 Shira OVIDE, « A Case for Banning Facial Recognition », The New York Times, June 9, 2020.
 Alejandro BARREDO ARRIETA et al., « Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI », Information Fusion, October 23, 2019, p. 2.
 Mark ROBBINS, « AI Explainability: Regulations and Responsibilities », Transport Canada, Innovation Centre, September 2019, p. 3.
 DADM, art. 4, 4.1 and 5.1.
 DADM, art. 6.2.3.
 See Treasury Board of Canada Secretariat, « Ensuring responsible use of artificial intelligence to improve government services for Canadians », press release, March 4, 2019.
 Principle No. 3.
 L.C. 2000, c. 5.
 Innovation, Science and Economic Development Canada, Strengthening Privacy for the Digital Age: Proposals to modernize the Personal Information Protection and Electronic Documents Act, Part 1, A. « Possible options ».
 Office of the Privacy Commissioner of Canada, Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence, Proposal No. 4.
 M. ROBBINS, supra., note ††, p. 3 and 4.
 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
 See on the subject, Sandra WACHTER, « Towards accountable AI in Europe? », The Alan Turing Institute, July 18, 2017.
 Janet WAGNER, « GDPR and Explainable AI », Zylotech.com, March 19, 2019; see also Andrew BURT, « Is there a ‘right to explanation’ for machine learning in the GDPR? », iapp.com, June 20, 2017.
 See on the subject, Information and Communications Technology Council, « L’Ère de demain : la main-d’œuvre amplifiée par l’intelligence artificielle du Canada », ICTC, April 7, 2020, p. 48.
 Philipp HACKER et al., « Explainable AI under contract and tort law: legal incentives and technical challenges », Springer.com, January 19, 2020, p. 4.
 European Commission, Ethics Guidelines for Trustworthy AI, p. 5 and 12.
 Ibid., p. 12.
 Ibid., p. 4; Adam GOLDENBURG and Michael SCHERMAN, « Automation not Domination: Legal and Regulatory Framework for AI », Canadian Chamber of Commerce, June 2019, p. 3, 5 and 6, and M. ROBBINS, supra., note ††, p. 4.
 Danny TOBEY, « Explainability: where AI and liability meet », DLA Piper.com, February 25, 2019.