DEEPFAKES: WHAT YOU HEAR OR SEE MAY NOT BE REALITY

2024-05-16

Tara D’Aigle-Curley and Samantha Spector
ROBIC
LAWYERS, PATENT AND TRADEMARK AGENTS

If you have recently seen an image or video on the internet that looks and sounds real but feels just a bit off, you may be looking at a deepfake. A deepfake is an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.[1] Deepfakes often take the form of face swapping, generative adversarial network-based fakes, lip-syncing and voice cloning.[2] Deepfakes have been around since the early days of the internet, but their popularity, and the risks they pose, have grown exponentially now that the technology is accessible to the general public. According to the World Economic Forum, online deepfakes increased by 90% between 2022 and 2023.[3] Advances in artificial intelligence have also made deepfakes look, sound and behave far more realistically: a 2024 deepfake is much more convincing than a 2018 one.

The increased accessibility, popularity and realism of deepfake technology can expose your business to fraud. Below, we provide examples, discuss the scarcity of deepfake regulation in Canada, Quebec and the United States, and explain how you can shield yourself and your company from falling victim to deepfakes.

Fraud, Reputation Damage and Company Mistrust

Deepfakes can be used to perpetrate fraud within a company. Recently, an employee at a Hong Kong-based company received a call, seemingly from an executive at their company, asking them to transfer money into a specified bank account. The employee complied. Unfortunately, the call was a deepfake, crafted from publicly available video and audio to mimic a high-profile company executive. As a result, $26 million was stolen from the company almost instantly.[4] Similar scenarios have been anonymously reported in Québec. Worse still, these deepfake voices can be convincing conversationalists, carrying on long conversations with unsuspecting victims.[5]

What’s more, companies can also be targeted through job interviews: the FBI warns that remote interviewees may not be the real candidates, but rather videos or images generated by artificial intelligence.[6]

Deepfakes can seriously damage a company’s reputation when ill-intentioned individuals publish fraudulent images of executives performing embarrassing actions or making inappropriate statements. A company’s stock price and credibility can crumble in seconds after a convincing and strategically disseminated deepfake.

From a security perspective, deepfakes can impact a company’s entire infrastructure. Confidential information can be compromised by insiders trying to make themselves undetectable through the use of deepfakes.[7]

The Legal World is Trying to Catch Up

Despite the long history of deepfakes, governments have been slow to catch up with the evolution of artificial intelligence technology.[8]

Even today, there are no federal laws in Canada that specifically regulate deepfake technology. Bill C-27, which aims to regulate AI, has been tabled but is slow to pass, and its implementation could take several months or even years.[9] Meanwhile, several provinces, including British Columbia, Prince Edward Island and New Brunswick, have passed legislation that allows individuals to take legal action to have explicit photos of themselves, real or altered, removed from the internet.[10] At the time of writing, only the Court of Quebec has ruled on the legality of deepfakes, sentencing a man to prison for child pornography created with this technology.[11]

The protection of privacy rights could, however, temporarily help plug this legislative gap and allow citizens harmed by the use of this technology to obtain some form of compensation. Indeed, under article 36 of the Civil Code of Québec, the use of a person’s name, image, likeness or voice for purposes other than legitimate public information may constitute an invasion of privacy.[12] In addition, privacy laws in all provinces and at the federal level prohibit the collection, use or disclosure of personal information without consent or for inappropriate purposes.[13]

Ultimately, in Canada, there is little or no legislation to sanction the use of deepfakes.[14]

At the other end of the spectrum, the US is actively working on AI and deepfake legislation. Congress recently introduced the “No AI FRAUD Act”, which aims to legally define “likeness and voice rights” and could lead to a ban on using AI-created deepfakes to represent a person without their consent. The Senate has proposed a similar bill, the NO FAKES Act, which seeks to protect musicians from unauthorized artificial intelligence versions of their faces and voices.[15] Moreover, New York Governor Kathy Hochul signed a bill that bans the distribution of AI-created deepfakes that depict nonconsensual sexual content.[16]

Don’t Be Fooled!

Bad actors will not wait for government regulations to come into place. They will continue to take advantage of sophisticated technology to achieve their aims.

Some software publishers have developed programs to help individuals and companies identify deepfakes.[17] These programs are tailored to the needs of different industries. For example, some focus on analyzing specific components of a deepfake (mouth movements, speech patterns, etc.), while others specialize in detecting deepfakes at scale across different platforms.[18] In addition to these tools, it is also essential to know how to distinguish deepfakes from real images and videos.

A helpful resource for learning to detect deepfakes is the Massachusetts Institute of Technology (MIT)’s Detect Fakes project, which offers several tips to help the general public determine whether an image or video is a deepfake:[19]

  • Pay attention to the image/video’s face:
    • Is it too wrinkly or too smooth?
    • Does the face match the rest of the photo?
  • Pay attention to the image/video’s eyes:
    • Do shadows appear where they are supposed to?
    • If the subject is wearing glasses: Is there glare? Too little glare? Do the glasses move with the person as normal glasses would?
  • Pay attention to the image/video’s hair:
    • Does the facial hair look real?
  • Pay attention to the image/video’s facial moles or other skin irregularities;
  • Pay attention to whether the subject blinks naturally, too much or not enough;
  • Pay attention to the image/video’s lip movements and whether they look natural.[20]
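Some of these cues can even be checked programmatically. As a purely illustrative sketch (not part of the MIT resource), the blinking cue is often automated with the eye aspect ratio (EAR), a standard blink-detection heuristic: the ratio collapses when the eye closes, so tracking it frame by frame reveals blink frequency. The snippet below assumes six (x, y) eye landmarks have already been extracted by a facial landmark detector; the sample coordinates are hypothetical.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) landmarks around one eye.

    The ratio of eyelid height to eye width drops sharply when the eye
    closes, so counting EAR dips across video frames gives a blink rate
    that can be compared with a natural human blink rate.
    """
    eye = np.asarray(eye, dtype=float)
    # Vertical distances between upper and lower eyelid landmarks.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the two eye corners.
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

# Hypothetical landmark sets for illustration only (not real data).
open_eye = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
closed_eye = [(0, 0), (2, 0.4), (4, 0.4), (6, 0), (4, -0.4), (2, -0.4)]

print(eye_aspect_ratio(open_eye))    # high value: eye open
print(eye_aspect_ratio(closed_eye))  # low value: eye closed
```

In a real pipeline, this calculation would run on every video frame, with an unnaturally low or high blink count flagging the footage for closer human review.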

It is essential not to provide personal information, money (even modest amounts) or property to anyone whose identity you doubt. Deepfake fraudsters may press for urgent action, but your safety should remain the top priority.

As governments attempt to regulate deepfake technology and its spread, it is essential for business leaders to guard against the potential consequences of deepfakes, to trust their instincts, and to know the key strategies and tools for protecting against them. Employees should also be made aware of these issues. Distinguishing deepfakes from the real thing can save your business from potentially disastrous consequences.


[1] Merriam-Webster.
[2] https://www.canada.ca/en/security-intelligence-service/corporate/publications/the-evolution-of-disinformation-a-deepfake-future/implications-of-deepfake-technologies-on-national-security.html
[3] https://incyber.org/en/article/blockchain-takes-on-deepfakes-ushering-in-an-era-of-digital-veracity/
[4] https://incyber.org/en/article/hong-kong-26-million-stolen-thanks-to-a-deepfake/
[5] https://www.lapresse.ca/affaires/techno/2023-12-13/fraudes-financieres/le-deepfake-debarque-au-quebec.php
[6] https://www.vice.com/en/article/akedaa/deepfakes-might-be-used-in-remote-job-interviews-fbi-warns
[7] https://www.canada.ca/en/security-intelligence-service/corporate/publications/the-evolution-of-disinformation-a-deepfake-future/implications-of-deepfake-technologies-on-national-security.html
[8] https://www.vice.com/en/article/7kxkja/internal-emails-show-fbi-freaking-out-about-deepfakes
[9] https://www.cbc.ca/news/politics/ai-pioneer-canada-needs-law-to-regulate-ai-now-1.7105463
[10] https://www.cbc.ca/news/canada/british-columbia/deepfake-pornography-to-the-masses-1.7104326
[11] R. c. Larouche, 2023 QCCQ 1853.
[12] Code civil du Québec, CQLR c CCQ-1991, art 36.
[13] See, for example, Loi sur la protection des renseignements personnels dans le secteur privé, CQLR c P-39.1, ss. 6 and 12.
[14] https://www.vice.com/en/article/5d9az5/congress-is-trying-to-stop-ai-nudes-and-deepfake-scams-because-celebrities-are-mad
[15] https://iapp.org/news/a/new-york-law-bans-explicit-deepfake-distribution/
[16] https://iapp.org/news/a/new-york-law-bans-explicit-deepfake-distribution/
[17] https://www.bbc.com/news/technology-53984114 ; https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/
[18] https://www.techopedia.com/best-ai-deepfake-detectors
[19] https://www.vice.com/en/article/akedaa/deepfakes-might-be-used-in-remote-job-interviews-fbi-warns
[20] https://www.media.mit.edu/projects/detect-fakes/overview/