Legal remedies against ‘deepfakery’

BY RICHARD - 1 July 2024

Essex Chambers podcast

Have you ever wondered how the law applies to AI-driven deepfakery? For a solid take on this from a UK law and practice perspective, I recommend you have a listen to 39 Essex Chambers’ latest ‘AI and the Law Podcast’ episode: ‘AI and the risk of deepfakery’. As noted in Spotify’s episode description, in “this episode David Mitchell speaks to Hanna Basha and Mark Jones of Payne Hicks Beach about tackling deepfakes on behalf of clients who are victims of different types of AI-generated deepfakes ranging from image based sexual abuse, commercial exploitation and political disinformation”. They’ve ‘been there and done that’, and they look at both civil law and criminal law remedies.

Are deepfakes harming people in New Zealand?

The short answer is yes. In New Zealand, deepfakes have already hit secondary schools, with the Secondary Principals Association stating: “There have been cases on both sides of the Tasman that target 50 to 60 young girls in the senior secondary space or indeed some staff so it’s certainly increasing in quantum” (see Deepfake bullying hits New Zealand schools). They have also targeted politicians, public health experts, and TV presenters.

Legal remedies

The type and availability of effective remedies against those who perpetrate deepfakes depend on the kind of deepfake involved and the context in which the deepfakery occurs. For example:

  • in the case of pornographic deepfakes, remedies may – depending on the circumstances – be available under the Harmful Digital Communications Act 2015, Privacy Act 2020, Harassment Act 1997, Copyright Act 1994 and/or the Crimes Act 1961, and through the torts of, for example, defamation, negligence, or infliction of emotional distress;
  • in the case of commercially exploitative deepfakes, remedies may – again depending on the circumstances – be available under the Fair Trading Act 1986, Copyright Act 1994, Crimes Act 1961 and/or through torts like defamation, passing off, or injurious/malicious falsehood; and
  • in the case of political disinformation, remedies might be available in defamation or possibly under the Harmful Digital Communications Act, the Crimes Act or, in the immediate proximity of an election, the Electoral Act 1993.

This is not to say that the availability of remedies will always be clear-cut, or that the remedies available will necessarily be effective. In some cases they will be, but in others it may be difficult to prove certain elements (such as the ‘intention to cause harm to a victim’ required for an offence under section 22 of the Harmful Digital Communications Act) or to eradicate offensive content from the web.

As you’ll hear if you listen to the podcast episode, UK law is said to suffer from a gap in protection, in that the creation of a deepfake is not itself expressly prohibited. Much depends on what is done with a deepfake once created. With the availability and power of generative AI continuing to increase, it seems highly likely that New Zealand will need to grapple with such issues, and probably sooner rather than later. And of course this is not only a legal issue. Victims of deepfakes can be traumatised, meaning effective responses may require not only the pursuit of legal remedies (including prompt takedown by social media platforms), but also the provision of mental health services to help victims recover.
