Content Warning: Image-based abuse
Increasingly sophisticated AI is leading to a rise in highly realistic ‘deepfake’ videos. This highlights the close relationship between technology and privacy, and underscores the fundamental principle that individuals should be able to determine if, how, and when information about them is used.
What are deepfakes?
‘Deepfakes’ are fake videos or images in which a person in an existing image or video is replaced with someone else’s likeness. The technology hijacks people’s faces: while initial images may bear little resemblance to the target, the AI process uses deep learning to continually refine the synthetic image into a realistic depiction of the targeted individual. The technology that exists today enables creators to manipulate their target’s facial movements and recreate their expressions in believable digital form.
Deepfakes rose to prominence when comedian Jordan Peele produced a video depicting former US president Barack Obama making several controversial comments. Peele was seeking to demonstrate the potential impact this technology could have on political discourse and democratic processes.
While this is certainly a concerning development, the technology also presents a much more immediate and significant harm: its use in creating pornographic videos depicting a person without that person’s consent.
Growing nature of deepfakes
A 2019 study by Deeptrace found that 96% of all deepfake videos were pornographic, and that the volume of this content had roughly doubled. The overwhelming majority of this content targets female actors and musicians in the entertainment sector.
However, the technology for producing deepfakes has become sufficiently accessible that it can be used to target anyone. In June 2020, Vox reported on this phenomenon and detailed the story of an Australian law student who became a target of sexual predators, who stole images from her social media and superimposed them onto the bodies of porn actresses engaged in sexual acts.
Deepfakes are also being commodified, with emerging platforms providing tools and services that make it easier for non-experts to create new content. More than 20 deepfake communities currently exist, with Deeptrace reporting that 13 of these communities have over 100,000 members. Dr Asher Flynn, associate professor of Criminology at Monash University, acknowledged that although the technology still requires a high standard of manipulation, the emergence of these sites is removing entry barriers and providing greater accessibility for new perpetrators. Dedicated deepfake sites currently host 94% of all deepfake pornography; some maintain a directory of porn actors best suited for celebrity deepfakes and enable users to pay for custom deepfakes. Apps have also emerged, such as Deep Nude, which enabled users to ‘strip’ photos of clothed women in 30 seconds and had over 95,000 users paying $50 in exchange for licensed, non-watermarked versions of explicit content. These systems make anyone vulnerable to becoming a target of deepfake tools.
The importance of privacy and consent
This technology makes discerning between fact and fiction increasingly difficult. These videos directly attack a person’s ability to control if, how, and when their image or likeness is used, foregrounding the role of consent.
Unsurprisingly, deepfake technology is developing at such a rapid pace that it is outpacing the measures designed to detect and mitigate it.
In Australia, several laws have been enacted in response to the growing number of image-based abuse crimes, recognising that individuals have the right to decide how their personal information is used and controlled.
Under the Privacy Act 1988 (Cth), any information from which an individual is “reasonably identifiable” – including images of them – is ‘personal information.’ However, the Privacy Act is limited to government entities and (generally) businesses with an annual turnover of more than $3 million (see here for more on the applicability of the Privacy Act). The limited protections of this legislation, coupled with the borderless nature of deepfakes, mean that most perpetrators would not be subject to this regime.
The Enhancing Online Safety Act 2015 (Cth) provides the eSafety Commissioner with a range of civil remedies to address the sharing of non-consensual, intimate images. Among other things, this enables the eSafety Commissioner to issue ‘take-down’ orders requiring the quick removal of images.
In NSW, the Crimes Act 1900 makes it unlawful to make or distribute a deepfake – or threaten to do so – where the person depicted has not consented, carrying a sentence of up to 3 years’ imprisonment and/or a fine of $11,000. Defamation laws may also have a role to play.
Unfortunately, however, the anonymity and geographical reach of the internet make it incredibly difficult for law enforcement, regulatory bodies or affected individuals to take effective action against perpetrators.
Staying safe online
As deepfake technology continues to improve and expand into new mediums, privacy plays an important role in protecting personal content posted online. Bodies such as the Australian eSafety Commissioner are specifically established to support victims of image-based abuse, providing detailed advice and resources for anyone experiencing it, assisting in removing the material, and holding perpetrators accountable.
The deepfake phenomenon highlights the potential for technology to be highly invasive. Privacy is all about controlling if, when and how information – including your image – is made available to the public. Deepfakes strike at the heart of this principle.