The AI-powered generation of hyper-realistic fake media, known as deepfakes, has raised pressing concerns globally. The recent viral deepfake video of Indian actor Rashmika Mandanna highlights the urgent need for interventions to prevent misuse. In this article, we dive into how deepfakes are created with AI, the harms they can cause, the policy responses they demand, and measures for the responsible use of this technology going forward.
What are Deepfakes?
The term “deepfake” amalgamates “deep learning” and “fake,” referring to manipulated media generated using advanced AI techniques. Deepfakes leverage neural networks to create doctored images, videos, audio and text that portray real people doing or saying fictional things in a highly realistic manner.
Some common types of deepfake content include:
- Fake celebrity porn videos made via face-swapping apps
- Political propaganda showing leaders making inflammatory remarks in synthesized speeches
- AI voice imitation used in deepfaked audio messages
- Fake textual content that mimics someone’s writing style
- AI generated profile photos used to create sham social media accounts
Deepfakes can thus spin hyper-realistic disinformation customized to target specific individuals, using AI to erode public trust. But how exactly does the technology work?
How Are Deepfakes Created Using AI?
The two main AI techniques that enable deepfake creation are:
Generative Adversarial Networks (GANs)
GANs employ two competing neural networks: a generator that creates fakes, and a discriminator that tries to distinguish them from real samples. Their adversarial interplay pushes the generator to produce increasingly realistic artificial media.
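To make the adversarial setup concrete, here is a minimal PyTorch sketch. The tiny fully-connected networks and random “real” data are placeholders for illustration, not a production deepfake model.

```python
# Minimal GAN loop: the generator learns to fool the discriminator,
# which in turn learns to separate real samples from fakes.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # placeholder sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for real media samples
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator: push real toward label 1, fake toward label 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: make the discriminator score fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As the two losses pull against each other, the generator's outputs drift toward the statistics of the real data, which is exactly what makes GAN-based fakes hard to spot.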

Autoencoders
Autoencoders compress input data into a latent representation and reconstruct it from that representation. Classic face-swap pipelines train a shared encoder with two person-specific decoders on footage of each person; decoding one person's latent features with the other person's decoder transfers the face.
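A minimal sketch of that shared-encoder idea follows, treating face crops as flattened vectors for simplicity; real pipelines use convolutional networks on aligned image tensors, and the random tensors here stand in for actual datasets.

```python
# Shared encoder, two person-specific decoders: train each decoder to
# reconstruct its own person, then decode person A's latent features
# with person B's decoder to perform the swap.
import torch
import torch.nn as nn

dim, latent = 64 * 64, 256  # placeholder: flattened 64x64 face crops

encoder = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, latent))
decoder_a = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, dim))
decoder_b = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, dim))

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

faces_a = torch.rand(32, dim)  # stand-ins for real face datasets
faces_b = torch.rand(32, dim)

for step in range(1000):
    loss = mse(decoder_a(encoder(faces_a)), faces_a) + \
           mse(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode person A, decode with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```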
Besides massive training data, generating convincing deepfakes requires heavy computing power, typically GPUs. Freely available open-source deepfake tools have further democratized access to creation.
Rashmika Mandanna Deepfake Controversy
The potential harms of deepfake technology came to the fore when a realistic deepfake video of popular Indian actor Rashmika Mandanna went viral recently. Despite clarifications from the actor, the video spread rapidly across platforms like Twitter and YouTube. It was originally created by editing a video of British-Indian influencer Zara Patel, who expressed distress over the misuse.

This incident highlights how easily deepfakes can be used to generate non-consensual intimate imagery of women. It has reignited calls for stronger legal protections and regulations against deepfake misuse. Mandanna also received support from celebrities and lawmakers, who pointed to the urgent need for deepfake governance.
Potential Risks and Challenges Posed by Deepfakes
Here are some major societal risks posed by malicious uses of deepfakes:
- Non-consensual fake porn videos causing reputational damage
- Disinformation campaigns using doctored footage of influential figures
- Fraud through voice imitation or face-swapped authorization
- Impersonation to evade facial recognition systems
- Blackmail/harassment using fake inappropriate imagery of targets
- Fomenting social instability by undermining evidence authenticity
Spotting deepfakes has become extremely tricky, for several reasons (a minimal detector sketch follows the list):
- Rapid improvements in AI-generation producing more realistic fakes
- Increased post-processing to minimize exposing artifacts
- Limited datasets biasing detectors to known manipulation traces
- Adversarial attacks that mislead detection algorithms
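To make the defender's side of this asymmetry concrete, here is a minimal sketch of a frame-level deepfake classifier in PyTorch. The tiny CNN and random frames are placeholders; real detectors use far larger backbones trained on curated datasets such as FaceForensics++, and, as the list above notes, they often fail on manipulation types absent from their training data.

```python
# Frame-level deepfake detector: a small CNN classifies frames as
# real (label 0) or fake (label 1). Illustrative only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

frames = torch.rand(16, 3, 64, 64)             # stand-in video frames
labels = torch.randint(0, 2, (16, 1)).float()  # 0 = real, 1 = fake

for step in range(100):
    loss = bce(detector(frames), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Estimated probability that a new frame is fake:
p_fake = torch.sigmoid(detector(torch.rand(1, 3, 64, 64))).item()
```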
Thus deepfakes represent an asymmetry favoring malicious actors over defenders. Their potential weaponization necessitates urgent policy and technology interventions.
What Legislative Measures Are Required to Govern Deepfakes?
Regulating deepfakes poses complex challenges but doing nothing is not an option. Some policy measures that need to be debated include:
- Ban on non-consensual sexual deepfakes
- Requiring source and manipulation disclosures on synthetic media (a minimal disclosure sketch follows this list)
- Extending likeness rights frameworks to cover deepfake abuses
- Enabling legal avenues for redress against reputation/privacy harms
- Incentivizing counter-deepfake technology research
- Platform accountability for enabling deepfake spread
- Flexible governance models adaptable to technological change
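As one concrete shape a disclosure requirement could take, the sketch below binds a disclosure statement to a media file via its SHA-256 hash in a JSON sidecar. The field names are hypothetical, not from any standard; emerging provenance standards such as C2PA define far richer, cryptographically signed manifests.

```python
# Illustrative provenance sidecar: record a media file's hash together
# with a manipulation disclosure. Field names are hypothetical.
import hashlib
import json
from pathlib import Path

def write_disclosure(media_path: str, source: str,
                     manipulations: list[str]) -> str:
    """Write a <media>.provenance.json sidecar and return its path."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    sidecar = {
        "media_sha256": digest,
        "source": source,
        "manipulations": manipulations,  # e.g. ["face_swap", "voice_clone"]
        "synthetic": bool(manipulations),
    }
    out = media_path + ".provenance.json"
    Path(out).write_text(json.dumps(sidecar, indent=2))
    return out

# Usage: write_disclosure("clip.mp4", "studio_xyz", ["face_swap"])
```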
A nuanced approach balancing consent, free speech, proportionality and adaptability will be needed. But the threats posed by unregulated deepfakes warrant strong legal deterrents against harmful misuse.
How Can Major Platforms Help Mitigate Deepfakes Risk?
Social media platforms like Facebook, Twitter, Instagram and YouTube, where deepfakes frequently spread, should undertake the following measures:
- Institute clear policies disallowing malicious deepfakes, non-consensual face-swaps and similar content
- Make it easy for users to report deepfake content violations
- Leverage AI techniques to automatically flag policy-violating deepfakes (one building block is sketched after this list)
- Add warning labels on identified deepfake videos
- Collaborate with experts to fact-check suspicious viral media
- Promote public awareness about deepfake risks
- Fund research to advance deepfake detection capabilities
- Provide broad dataset access to researchers to accelerate counter-deepfake work
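As a sketch of one automated-flagging building block referenced above, the snippet below matches new uploads against perceptual hashes of previously confirmed deepfake frames. It assumes the third-party Pillow and imagehash libraries and a hypothetical reference image; real systems combine such matching with learned detectors and human review.

```python
# Flag re-uploads of known deepfakes by perceptual-hash matching.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hashes of frames from previously confirmed deepfakes (illustrative file).
known_fakes = {imagehash.phash(Image.open(p))
               for p in ["confirmed_fake_frame.png"]}

def should_flag(upload_frame_path: str, max_distance: int = 6) -> bool:
    """Return True if the frame is within Hamming distance of a known fake."""
    h = imagehash.phash(Image.open(upload_frame_path))
    return any(h - known < max_distance for known in known_fakes)

# A flagged upload would then receive a warning label and human review.
```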
However, content moderation policies should respect civil liberties and avoid over-censorship. Moderation processes must be accountable, transparent and respectful of user rights.
Pathways for Responsible Advancement of Deepfake Technology
Like most cutting-edge technologies, deepfakes come with risks as well as benefits. Some considerations for the ethical advancement of deepfake AI include:
- Securing explicit consent for generating identifiable deepfakes
- Transparent sourcing of training data and manipulation disclosure
- Avoiding political, social and psychological manipulation
- Advancing assistive use-cases like speech prosthetics
- Developing open datasets to strengthen detection
- Promoting education and awareness about deepfake capabilities
With a collaborative approach balancing innovation and responsibility, deepfakes could potentially be steered to serve social good instead of harm.
Conclusion
The proliferation of deepfakes presents new challenges for truth and trust at the intersection of technology and society. But with coordinated, multi-disciplinary efforts encompassing ethics, governance and counter-technology research, this cutting-edge AI capability could hopefully be transformed into a trusted instrument of progress rather than regression. The choices we make today about regulating deepfakes will profoundly shape the emerging information order.