
Deepfakes: The Synthetic Media I Want to Believe

What Are Deepfakes?

A deepfake is a form of “synthetic media”: material (such as images, audio, and video) that has been modified or generated entirely by artificial intelligence. Technological advances have long made manipulating media easier and more accessible (through tools like Photoshop and Instagram filters). Recent breakthroughs in AI, however, are set to take this even further by giving machines the ability to create entirely synthetic media, with far-reaching consequences for how we create material, communicate, and comprehend the world. The technology is still in its early stages, but within a few years anyone with a smartphone may be able to create Hollywood-quality special effects with little expertise or effort.

“When used maliciously as disinformation, or when used as misinformation, a piece of synthetic media is called a ‘deepfake.’” – Nina Schick

How Do They Work?

A deepfake’s algorithm superimposes the movements and words of one person onto those of another. Given two example videos of two people, an impersonator and a target, the algorithm creates a new synthetic video in which the targeted person moves and speaks in the same manner as the impersonator. The more video and audio examples from which the algorithm can learn, the more realistic its digital impersonations will be.
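
To make those mechanics concrete, the sketch below shows the classic shared-encoder, two-decoder autoencoder design popularized by early face-swap tools. It is a minimal illustration rather than a working deepfake system: the random tensors stand in for aligned face crops of the impersonator and the target, and every layer size, learning rate, and step count is an arbitrary assumption.

    # Minimal face-swap autoencoder sketch (PyTorch). Toy sizes throughout;
    # random tensors stand in for aligned face crops of two people.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(               # shared: learns pose/expression
        nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
    decoder_a = nn.Sequential(             # learns person A's appearance
        nn.Linear(256, 64 * 64), nn.Sigmoid())
    decoder_b = nn.Sequential(             # learns person B's appearance
        nn.Linear(256, 64 * 64), nn.Sigmoid())

    params = (list(encoder.parameters()) + list(decoder_a.parameters())
              + list(decoder_b.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.MSELoss()

    faces_a = torch.rand(32, 1, 64, 64)    # stand-in: impersonator frames
    faces_b = torch.rand(32, 1, 64, 64)    # stand-in: target frames

    for step in range(200):
        opt.zero_grad()
        # Each decoder is trained only to rebuild its own person's face.
        rec_a = decoder_a(encoder(faces_a)).view(-1, 1, 64, 64)
        rec_b = decoder_b(encoder(faces_b)).view(-1, 1, 64, 64)
        loss = loss_fn(rec_a, faces_a) + loss_fn(rec_b, faces_b)
        loss.backward()
        opt.step()

    # The swap: encode A's expression, decode it with B's decoder, so the
    # target appears to move and speak like the impersonator.
    with torch.no_grad():
        swapped = decoder_b(encoder(faces_a)).view(-1, 1, 64, 64)

The key design point is that the shared encoder learns a person-independent representation of pose and expression while each decoder learns one person’s appearance; running the impersonator’s encoding through the target’s decoder produces the swap. Real tools add convolutional layers, adversarial losses, and careful face alignment, but the underlying trick is the same.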

How Easy Are They To Make?

Until recently, only special effects experts could create convincingly realistic-looking and sounding fake videos. However, AI now enables non-experts to create fakes that many people mistake for the real thing. Although the deep-learning algorithms on which they rely are complex, there are user-friendly platforms on which people with little to no technical knowledge can create deepfakes. The simplest of these platforms allows anyone with internet access and images of a person’s face to create a deepfake.

How Can You Spot A Deepfake?

Deepfakes are notoriously difficult to detect. Because they lack obvious or consistent signatures, media forensic experts must sometimes rely on subtle cues that are difficult for deepfakes to imitate. Abnormalities in the subject’s breathing, pulse, or blinking are all telltale signs. A normal person, for example, blinks more frequently when speaking than when silent; subjects in authentic videos follow these patterns, whereas subjects in deepfakes often do not. Some quick checks (a minimal blink-counting sketch follows the list):

  • Face — Is someone blinking too much or too little? Do their eyebrows fit their face? Is their hair in the wrong spot? Does their skin look airbrushed or, conversely, unnaturally wrinkled?
  • Audio — Does someone’s voice fail to match their appearance (e.g., a heavyset man with a higher-pitched, feminine voice)?
  • Lighting — What sort of reflection, if any, are a person’s glasses giving under a light? (Deepfakes often fail to fully reproduce the natural physics of lighting.)
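
As a concrete example of the blinking cue, the following sketch counts blinks with the widely used eye aspect ratio (EAR) heuristic. It assumes eye landmarks arrive from an external face-landmark detector (dlib, MediaPipe, or similar); the 0.2 threshold and two-frame minimum are conventional but tunable assumptions, and the toy EAR trace at the end stands in for a real video.

    # Blink counting via the eye aspect ratio (EAR) heuristic.
    # Assumption: each `eye` is a (6, 2) array of 2-D landmarks per frame,
    # produced upstream by a face-landmark detector (e.g. dlib, MediaPipe).
    import numpy as np

    def eye_aspect_ratio(eye: np.ndarray) -> float:
        # Vertical eyelid distances divided by horizontal eye width:
        # the ratio collapses toward zero while the eye is closed.
        v1 = np.linalg.norm(eye[1] - eye[5])
        v2 = np.linalg.norm(eye[2] - eye[4])
        h = np.linalg.norm(eye[0] - eye[3])
        return (v1 + v2) / (2.0 * h)

    def count_blinks(ear_per_frame, closed_thresh=0.2, min_frames=2):
        blinks, run = 0, 0
        for ear in ear_per_frame:
            if ear < closed_thresh:
                run += 1                    # eye currently closed
            else:
                if run >= min_frames:       # a completed blink
                    blinks += 1
                run = 0
        return blinks

    # Toy EAR trace: open eyes, one three-frame blink, open eyes again.
    ears = [0.30] * 50 + [0.12] * 3 + [0.30] * 50
    print(count_blinks(ears))               # -> 1

A clip of a subject who never blinks across a long stretch of speech is exactly the kind of anomaly this checklist points at.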

What Kinds Of Damage Could Deepfakes Cause In Global Markets Or International Affairs?

Deepfakes have the potential to incite political violence, sabotage elections, and disrupt diplomatic relations. In 2018, for example, a Belgian political party shared a deepfake on Facebook that appeared to show US President Donald Trump criticizing Belgium’s stance on climate change. The crude video was easy to dismiss, but it still elicited hundreds of online comments expressing outrage that the US president would meddle in Belgium’s internal affairs.

Deepfakes could also be used to humiliate and blackmail people or attack organizations by presenting false evidence that their leaders have acted badly, potentially causing stock prices to plummet. Deepfakes are an especially insidious form of deception because people are wired to believe what they see—a problem exacerbated by how quickly and easily social media platforms can spread unverified information.

Do Deepfakes Have Any Positive Applications?

They do, indeed. One of the best examples comes from the ALS Association, an American nonprofit that funds global amyotrophic lateral sclerosis (ALS) research. The association has partnered with Lyrebird to use voice cloning, the same technology that underpins deepfake audio, to assist people living with ALS (also known as Lou Gehrig’s disease). The project records the voices of people with ALS so that they can be digitally recreated in the future—a genuinely useful application of the technology.

Source: https://datasociety.net/library/deepfakes-and-cheap-fakes/

Foreign Policy Implications

The arrival of deepfakes has frightening implications for foreign policy and national security. They have the potential to become powerful tools of covert action campaigns and other forms of disinformation in international relations and military operations, with serious consequences. The 2017 information operation against Qatar, which attributed pro-Iranian views to Qatar’s emir, demonstrates how damaging fraudulent content can be even without credible audio or video.

Imagine, for example, a credible deepfake audio file purporting to be a recording of President Donald J. Trump speaking privately with Russian President Vladimir Putin during their 2018 meeting in Helsinki, with Trump promising Putin that the US would not defend certain NATO allies in the event of Russian subversion. Other examples could include deepfake videos depicting an Israeli soldier committing atrocities against a Palestinian child, a European Commission official offering to end agricultural subsidies on the eve of a crucial trade negotiation, or a Rohingya leader advocating violence against Myanmar’s security forces.

Democracy may also suffer. A plausible video clip depicting a candidate saying heinous things, distributed twenty-four hours before an election, could sway the outcome. More broadly, deepfakes would enable more effective disinformation operations of the kind Russia mounted against the 2016 US presidential election. As the technology spreads, a broader range of nonstate actors and individuals will be able to cause similar harm.

The Challenge of Limiting the Harms

There is no silver-bullet solution to this problem, and no way to reverse the technological progress that makes deepfakes possible. Worse, some of the most plausible responses come at a high price.

Ideally, technological solutions could adequately address this technology-driven problem. However, while strong detection algorithms (including methods based on generative adversarial networks, or GANs) are emerging, they lag behind the innovation found in the creation of deepfakes. Even if an effective detection method is developed, it will be difficult to have a widespread impact unless the major content distribution platforms, including traditional and social media, adopt it as a screening or filtering mechanism.

The same is true for potential digital provenance solutions: video or audio content can be watermarked at the time of creation, producing immutable metadata that records the time and place of capture and attests that the material was not tampered with. To have a broad impact, digital provenance solutions must be built into all devices used to create content, and traditional and social media must incorporate them into their screening and filtering systems. However, there is little reason to expect convergence on a common digital provenance standard, let alone that such technology would be adopted in those ways.
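
As a toy illustration of the digital-provenance idea (this is not C2PA or any real standard), the sketch below signs a media file’s hash together with capture metadata using an Ed25519 device key, so that any later edit to the content or the metadata breaks verification. The field names and workflow are assumptions for illustration; it uses the Python cryptography package.

    # Toy digital-provenance sketch: sign a media hash plus capture metadata.
    # Assumptions: hypothetical metadata fields; in a real device the private
    # key would live in tamper-resistant hardware, not in process memory.
    import hashlib, json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()   # embedded at manufacture

    def make_provenance(media: bytes, when: str, where: str) -> dict:
        record = {
            "sha256": hashlib.sha256(media).hexdigest(),
            "captured_at": when,
            "captured_in": where,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = device_key.sign(payload).hex()
        return record

    def verify_provenance(media: bytes, record: dict, public_key) -> bool:
        claimed = {k: v for k, v in record.items() if k != "signature"}
        if hashlib.sha256(media).hexdigest() != claimed["sha256"]:
            return False                        # content was altered
        payload = json.dumps(claimed, sort_keys=True).encode()
        try:
            public_key.verify(bytes.fromhex(record["signature"]), payload)
            return True
        except InvalidSignature:
            return False                        # metadata was altered

    video = b"...raw camera frames..."
    rec = make_provenance(video, "2024-01-01T00:00:00Z", "Dhaka, BD")
    print(verify_provenance(video, rec, device_key.public_key()))            # True
    print(verify_provenance(video + b"edit", rec, device_key.public_key()))  # False

The hard part, as the paragraph above notes, is not the cryptography but deployment: every capture device would need to embed such a key, and every distribution platform would need to check the record.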

Another option would be for Congress to intervene with regulatory legislation requiring the use of such technology, but this approach would involve a level of market intervention unprecedented in the history of these platforms and devices. This option also runs the risk of stifling innovation due to the need to pick winners while technologies and standards evolve.

Legal and regulatory frameworks may play a role in mitigating the problem, but as with most technology-based solutions, they will struggle to have a broad impact, particularly in international relations.

Another option is to put pressure on traditional and social media platforms to do more to identify and suppress deepfakes, a familiar argument in today’s ongoing debate about disinformation and social media. Companies like Facebook are well positioned to prevent the widespread distribution of harmful content. In response to recent pressure from regulators and lawmakers, Facebook and other platforms have expressed a strong desire to improve the quality of their filtering systems. Nonetheless, past performance suggests that such promises should be treated with caution.

Social media platforms have long been shielded from liability for disseminating harmful content. With a few exceptions, Section 230 of the US Communications Decency Act of 1996 broadly shields online service providers from liability for user-generated content. By limiting that immunity, Congress could give platforms stronger incentives to self-police. It could, for example, condition Section 230 immunity on whether a company made reasonable efforts to identify and remove falsified, harmful content at the upload stage or after receiving notification about it once posted. However, such a legislative effort would almost certainly face stiff opposition from businesses, as well as from those who question whether such screening can be performed without imposing political or ideological bias.

Deepfakes do not need a large audience to do harm. From the standpoint of national security and international relations, the most dangerous deepfakes may not be the ones disseminated via social media channels. Instead, they could be delivered to specific audiences as part of a reputational sabotage strategy. This approach will be especially appealing to foreign intelligence services hoping to influence decisions made by people who lack access to cutting-edge detection technology.

What Are Governments Doing To Defend Against The Harm That Deepfakes Could Cause?

So far, the European Union (EU) has taken the most proactive steps to combat all forms of deliberate disinformation, including deepfakes.

In 2018, Brussels released a disinformation strategy that includes guidelines relevant to defending against deepfakes. Across all forms of disinformation, the guidelines emphasize public engagement, so that people can tell where a piece of information came from, how it was produced, and whether it is trustworthy. The EU strategy also calls for the establishment of an independent European network of fact-checkers to help investigate the sources and production processes of content.

Deepfakes have alarmed lawmakers from both parties and both chambers of Congress in the United States. In 2018, Reps. Adam Schiff and Stephanie Murphy, along with former Rep. Carlos Curbelo, wrote a letter to the director of national intelligence requesting an investigation into how foreign governments, intelligence agencies, and individuals could use deepfakes to harm U.S. interests, and how they could be stopped.

China is an intriguing case to follow. I haven’t seen any official statements or actions from the government expressing concern about deepfakes. However, Xinhua, China’s state-run news agency, recently experimented with using digitally generated anchors to deliver news.

What More Could Countries Do?

Countries could, for example, define inappropriate uses of deepfakes. Because deepfakes appear in many contexts and serve many purposes (both good and bad), society must decide which uses are acceptable and which are not. This would help social media companies regulate their platforms for harmful content.

Governments could also make it easier for social media platforms to share information about deepfakes with one another, with news organizations, and with nongovernmental watchdogs. A deepfake information sharing act, similar to the United States Cybersecurity Information Sharing Act of 2015, could allow platforms to alert each other to a malicious deepfake before it spreads to other platforms, and to alert news agencies before the deepfake enters the mainstream news cycle.
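
To make the idea tangible, here is a hypothetical sketch of what a shared alert might look like: one platform publishes a fingerprint of a confirmed deepfake so that peers can screen uploads against it. The schema and field names are invented, and a plain SHA-256 stands in for the perceptual hash a real system would need so that re-encoded copies still match.

    # Hypothetical cross-platform deepfake alert record.
    # Assumptions: invented schema; SHA-256 stands in for a perceptual hash,
    # which a real system would use so re-encoded copies still match.
    import hashlib, json
    from dataclasses import dataclass, asdict

    @dataclass
    class DeepfakeAlert:
        content_hash: str     # fingerprint of the flagged media
        reported_by: str      # originating platform
        first_seen: str       # ISO-8601 timestamp
        assessment: str       # e.g. "confirmed-deepfake", "suspected"

    def build_alert(media: bytes, platform: str, when: str) -> str:
        alert = DeepfakeAlert(
            content_hash=hashlib.sha256(media).hexdigest(),
            reported_by=platform,
            first_seen=when,
            assessment="confirmed-deepfake",
        )
        return json.dumps(asdict(alert))   # wire format shared with peers

    def matches_known_deepfake(upload: bytes, alert_json: str) -> bool:
        alert = json.loads(alert_json)
        return hashlib.sha256(upload).hexdigest() == alert["content_hash"]

    fake_clip = b"...flagged video bytes..."
    alert = build_alert(fake_clip, "platform-a.example", "2024-01-01T00:00:00Z")
    print(matches_known_deepfake(fake_clip, alert))   # True: block before it spreads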

At the very least, governments must fund the development of media forensic techniques for detecting deepfakes. There is an ongoing race between automated techniques for creating deepfakes and forensic techniques for detecting them. To keep pace with new deepfake algorithms, such investments must continue, if not increase.

How Urgent Is The Problem?

Deepfakes have not yet been used to incite violence or disrupt elections. However, the necessary technology is available. This means that countries are running out of time to protect themselves against the potential threats posed by deepfakes before they cause a disaster.

Conclusion

Disinformation and distrust online are set to take a turn for the worse. Rapid advances in deep-learning algorithms for synthesizing video and audio content have made possible the production of “deepfakes”—highly realistic and difficult-to-detect depictions of real people doing or saying things they never did or said. As this technology spreads, the ability to produce fake yet credible video and audio content will come within reach of an ever-larger array of governments, nonstate actors, and individuals. As a result, the ability to advance lies using hyper-realistic fake evidence is poised for a great leap forward.

The array of potential harms that deepfakes could entail is stunning. A well-timed and thoughtfully scripted deepfake, or series of deepfakes, could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society.

Deepfakes pose a serious problem for democratic governments and the global order. Bangladesh should start by raising awareness of the problem in technical, governmental, and public circles, so that policymakers, the tech industry, academics, and individuals understand the destruction, manipulation, and exploitation that deepfake creators can inflict.


Tawhidur Rahman
