
The word ‘deepfake’ has been thrown around a lot recently, but what exactly is a deepfake?
Deepfake technology can be seen as a tool capable of creating synthetic (fake) but realistic images and videos by manipulating or generating visual and audio content with Artificial Intelligence (AI). Slowly, this technology has become accessible (at a price), and its potential to impact various aspects of society, from politics to personal privacy, is growing significantly; it has already affected certain fields.
What is a deepfake?
Think of a deepfake like a puppet show. In a puppet show, there is a puppeteer making puppets move and behave as humans would. With deepfakes, instead of controlling puppets or dolls, the puppeteer controls and manipulates photos and videos, taking someone’s face and making it do or say things that person never did.
Just as one can make a puppet dance, deepfake technology uses AI to change someone’s actions or appearance so that they seem to be doing something else. It is a bit like placing a digital mask over someone’s face to turn them into someone else, sometimes for fun or to tell a story. This is all fine if there is consent and the story being told in a deepfake is ‘nice’. But what happens if the story is not nice, feelings get hurt, and people are misled by fake information?
How are deepfakes created?
Deepfakes are produced using advanced AI/deep learning techniques, most notably Generative Adversarial Networks (GANs). Think of a GAN as two AI algorithms competing against each other: one creates the fake, and the other attempts to detect it. This ongoing contest drives rapid improvements in the generated content, making it ‘feel’ more real. This is a simplified view of how deepfakes are made; we won’t go deeper into how GANs work here, but will instead look at the impact of deepfakes.
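For readers curious about the mechanics, here is a minimal sketch of that generator-versus-discriminator contest in Python, assuming PyTorch is available. The “real” data is just numbers drawn from a simple one-dimensional distribution rather than images, and the network sizes, learning rates and step count are illustrative assumptions, not anything used by actual deepfake systems:

```python
# Toy GAN sketch: a generator learns to mimic samples from N(3, 0.5)
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data
    fake = G(torch.randn(64, 8))             # generated data

    # 1) Train the discriminator to separate real from fake.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # 2) Train the generator to fool the discriminator.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

# After training, the generator's outputs should cluster near 3.
print(G(torch.randn(5, 8)).detach().squeeze())
```

Real deepfake pipelines replace these tiny networks with large image and audio models, but the back-and-forth training idea is the same.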
Deepfakes and their implications, or “Seeing is… not believing”
Deepfakes pose real risks to personal privacy and consent. Because of their manipulative potential, malicious use of deepfakes can undermine trust in media, distort public information and infringe on individual rights. The saying ‘Seeing is believing’ is being challenged, as it is becoming difficult to distinguish real content, or ‘truth’, from fake content.
You can visit https://detectfakes.kellogg.northwestern.edu/ to test if you can discern fake from real images.
Malicious use of deepfakes
We are going to go through some examples of deepfakes used maliciously and, hopefully, by the end we will see the increasing and urgent need for policies and sound AI regulation in Mauritius.
- Non-consensual deepfake pornography
Deepfakes have been used to create explicit images and videos without the individual’s consent. Victims like Noelle Martin discovered manipulated pornographic content of themselves online, created from their publicly available photos. This is a serious misuse that violates privacy and causes psychological harm.
- Democratic interference
Influencing public perception can influence electoral outcomes. Deepfakes have been used to create fake political advertisements and news. For example, images and videos can be altered to make a political candidate more appealing, while deepfakes of politicians can be created to mislead voters and damage reputations. Deepfakes can harm the democratic process; one alleged example is a deepfake audio clip in which a political figure was heard berating his staff.
- Slow erosion of trust in media
Previously, a simple Google search would give some insight into whether a specific piece of content was fake. Nowadays, it is increasingly hard to discern real from fake content as deepfakes become more sophisticated. An erosion of trust in media leads to skepticism about displayed content regardless of its source. When people start to doubt whether what they see on their screens is authentic, social divisions can deepen.
- Harassment and smear campaigns
Another ethical concern is the use of deepfakes to harass individuals. Deepfakes have been used to silence and discredit journalists during smear campaigns.
Globally, the legal response to deepfakes is developing slowly and varies by jurisdiction. Some countries have begun to adapt existing laws on fraud, defamation, and image rights to cover deepfakes, while others have introduced legislation aimed specifically at this new form of digital manipulation. In the US, several states have passed laws criminalising the creation and distribution of non-consensual deepfake pornography. Many other places, however, have no specific regulations that adequately cover the range of potential abuses, leaving victims without recourse.
What does this all mean for Mauritius?
Given the malicious uses of deepfakes described above, the need for informed policy, robust legal measures and community engagement is clear and urgent. By addressing the challenges posed by deepfakes head-on, Mauritius can safeguard its digital and social fabric, ensuring that the benefits of AI are realised while its potential for harm is contained or minimised.
How do we enable this safeguard? To do so, Mauritius will require laws that explicitly ban the creation and distribution of malicious deepfakes, with clear definitions and penalties. This legislation should cover non-consensual deepfake pornography and fraudulent manipulation in political campaigns.
Although the development of such a legal framework is essential, the fight against deepfakes will also require help from local tech communities and public awareness campaigns. Local tech communities would need to foster cooperation among technologists, legal experts and civil society to monitor the landscape of digital content manipulation. Such cooperation would help raise awareness of the dangers of deepfakes and could lead to adaptive strategies.
At this point in the AI summer, I am more scared of deepfakes than of AI killing us.