The Dark Side of AI: How Deepfakes Are Weaponizing Personal Identities


In January 2024, a deepfake video of Indian actress Rashmika Mandanna went viral on social media, causing widespread outrage. The incident once again raised alarms about the dangerous implications of AI-generated deepfakes in India and abroad. Numerous Bollywood celebrities, including Katrina Kaif, Kajol, and Alia Bhatt, have fallen prey to similar manipulations. In response, the Indian government is advocating stricter regulations to address this escalating problem, highlighting the need for legal frameworks that curb the spread of misinformation and safeguard individuals against identity theft and digital harassment.

Generating Fraudulent Imagery

Deepfakes are an increasingly sophisticated form of synthetic media that uses artificial intelligence (AI) to create hyper-realistic digital images of real individuals saying or doing things they have never done, or even of entirely fictitious people. Deepfakes are often misused for malicious ends, including the dissemination of false information, and their rising usage also extends to evading facial recognition systems. Research from the Idiap Research Institute indicates that 95% of facial recognition platforms fail to identify deepfakes.

In contrast to traditional photos and screen recordings that are physically shown to a camera during a presentation attack, deepfakes are entirely digital creations generated through deep neural network-based machine learning. Moreover, these authentic-looking deepfakes can be “injected” into devices through hardware and software vulnerabilities without involving the camera at all, a technique known as an injection attack. This makes them extremely difficult to detect with standard presentation attack detection measures.

Types of Facial Deepfakes

There are three main types of facial deepfakes highlighted by experts:

  • Face swapping is one of the most straightforward deepfake techniques, thanks to the availability of various ready-made solutions for generating facial deepfakes. Websites and applications specialising in face swapping can seamlessly “insert” the face of any real individual into a video.
  • Face synthesis, on the other hand, entails creating a highly realistic image or video of either an actual or fictional face, leveraging Generative Adversarial Networks (GAN). This method is quite intricate and necessitates considerable expertise from a potential attacker.
  • Face manipulation, i.e., altering facial expressions, can enhance the realism of a doctored image or video. Like face synthesis, face manipulation employs GAN technology, enabling various alterations to a face, such as modifying facial expressions, age, gender, and even hair and eye colour. By integrating a system of discriminators, generators, and domain labels, the network can effectively project a range of emotions onto a target’s face.

Technology Behind Deepfakes

The entire concept of deepfakes revolves around an approach known as Generative Adversarial Networks (GANs), introduced by Ian Goodfellow and his team at the University of Montreal in 2014.

GANs are not merely a piece of technology; they represent a strategy for “generative modelling utilizing deep learning techniques.” The term “generative” highlights that GANs possess an intrinsic ability to create or generate their own data. For instance, when provided with countless images, GANs can produce a new image that resembles, yet differs from, the input images independently. This capability can also be applied to various media forms. However, a challenge arises in assessing the authenticity and acceptability of the “generated” output. To address this issue, the “Adversarial” component of GANs includes a discriminative network that verifies the generated data against authentic data. In simple terms, the generative network and the discriminative network function as adversaries, competing against one another.
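To make this adversarial competition concrete, the sketch below trains a one-layer generator against a logistic-regression discriminator on one-dimensional Gaussian data. This is an illustrative toy under our own assumptions (every name, architecture, and hyperparameter is ours, not from any production deepfake system), but it shows the core loop: the generator never sees the real data directly and improves only through the gradient signal of the discriminator it is trying to fool.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator must learn to mimic this.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: a single affine map, noise z -> w*z + b (starts at the identity).
G = {"w": np.ones((1, 1)), "b": np.zeros(1)}
# Discriminator: logistic regression, x -> sigmoid(v*x + c).
D = {"v": rng.normal(size=(1, 1)) * 0.1, "c": np.zeros(1)}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

lr, n = 0.05, 64
for step in range(2000):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    z = rng.normal(size=(n, 1))
    fake = z @ G["w"] + G["b"]
    real = real_batch(n)
    p_real = sigmoid(real @ D["v"] + D["c"])
    p_fake = sigmoid(fake @ D["v"] + D["c"])
    # Gradients of the binary cross-entropy loss w.r.t. D's parameters
    grad_v = (real.T @ (p_real - 1) + fake.T @ p_fake) / n
    grad_c = np.mean(p_real - 1) + np.mean(p_fake)
    D["v"] -= lr * grad_v
    D["c"] -= lr * grad_c

    # --- Generator step: push D(fake) toward 1 (fool the discriminator) ---
    z = rng.normal(size=(n, 1))
    fake = z @ G["w"] + G["b"]
    p_fake = sigmoid(fake @ D["v"] + D["c"])
    # Chain rule through the affine map for the non-saturating loss -log D(G(z))
    upstream = (p_fake - 1) @ D["v"].T          # shape (n, 1)
    G["w"] -= lr * (z.T @ upstream) / n
    G["b"] -= lr * np.mean(upstream, axis=0)

# After training, the generator's samples should drift toward the data mean (4).
samples = rng.normal(size=(500, 1)) @ G["w"] + G["b"]
print(float(samples.mean()))
```

Note the adversarial structure in the two alternating steps: the discriminator’s update and the generator’s update pull the same quantity, D(fake), in opposite directions, which is exactly the competition described above. Real deepfake generators replace the affine map with deep convolutional networks and the scalar data with images, but the training dynamic is the same.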

Positive Possibilities of Altered Reality

Technology is meant to improve human living standards; however, these technologies are often exploited for harm and eventually earn a bad name. Many organisations and businesses have yet to explore the opportunities generative technology offers. The technology is still at a nascent stage, and its potential is not limited to entertainment and digital media; it cuts across many domains, such as advertising and marketing.

The first benefit is obvious from the consumers’ perspective: they are offered customised, personalised, and enhanced images and videos.

Secondly, from a company’s point of view, the generative aspect of deepfakes can be used to automate time-consuming, human-centric tasks. In the fashion industry, for example, GANs can blend real footage with virtual artefacts: celebrities could, with their consent, license their images for campaigns without doing a photoshoot. GAN technology can also personalise a website and its e-commerce segment, helping companies increase customer visits and sales through virtual celebrity endorsement.

  • One underexplored aspect is the application of deepfakes beyond altering human faces and voices, to objects as well. Major advertising agencies and media corporations create advertisements for many businesses, particularly those in the consumer goods sector whose campaigns feature numerous product placements. Manually modifying the products within an ad for different regions is labour-intensive for designers; with deepfake technology, product placements and advertising campaigns can be tailored to specific regions by integrating location-appropriate products into the advertisement content.
  • Deepfakes offer an innovative approach to localising video content. For instance, if the government runs a campaign aimed at Indian farmers, it can use deepfakes to adapt the ethnicity of the actors, ensuring the content remains free from ethnic bias. This technique applies to many aspects of the media, including facial features and even audio, to tailor content for diverse ethnic groups.
  • Deepfakes can also be used extensively in advertising campaigns, e-learning resources, and other contexts tailored to different demographic segments and regions. They allow dialogue to be customised to the audience’s preferences and are also used in film post-production.

Imminent Threats Posed by Deepfakes

As we have discussed, deepfakes have powerful possibilities and legitimate industrial uses. Yet the technology has captured mass attention largely because of its rampant, illegal exploitation.

Misinformation and Deceptive Campaigns – Deepfakes have moved beyond mere entertainment and fake news stories; they have spread across online channels and are weaponised to damage reputations and undermine companies’ credit ratings. The harm is not confined to businesses: attackers can clone an individual’s image and voice and deploy them in spam calls to extract personal information.

Political Manipulation – Among the most widely circulated examples of Deepfakes are the videos featuring Donald Trump declaring that “AIDS is over” and a “satirical” interview with Democratic Congressional candidate Alexandria Ocasio-Cortez, in which she bashfully shakes her head in response to a question about her grasp of political matters. These videos gained significant traction on social media. Such content has the potential to significantly influence public opinion regarding political figures, potentially inciting controversy and posing risks to the national security of countries facing various sensitive issues.

Cyber Blackmail – The risk that deepfakes pose to targeted individuals and celebrities, who may be compelled to make payments online, could become increasingly common in the future, paralleling similar patterns seen with malware, spyware, and other cyber threats in recent years.
