Game-changing technologies have taken many forms. From communications platforms to geolocation tools, the modern technological boom has put a range of extremely powerful digital tools into the hands of billions around the world.
But the latest paradigm shift has taken a strange form.
Deepfake technology, as it has come to be known, presents a challenge our societies have never faced before: it threatens our very ability to rely on information. In its most technical definition, a deepfake is synthetically crafted media that can portray the likeness of a known individual with incredible accuracy.
Deepfakes can be produced using a method known as generative adversarial networks (GANs), a type of machine learning in which two models ‘challenge’ each other in a digital game of sorts. One model, the forger, is fed a data set of media and attempts to create forgeries. It then shows these forgeries to the other model, which tries to distinguish the fakes from the originals. The forger keeps producing fakes until its counterpart can no longer detect the forgery. The larger the set of training data, the easier it is for the forger to create a believable deepfake, since it has more examples from which to learn the details of the subject. For this reason, videos of famous people, from presidents to Hollywood celebrities, were often used as ‘targets’ in the early, first generation of deepfakes: there is a great deal of publicly available video footage with which to train the forger.
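The adversarial loop described above can be illustrated with a deliberately oversimplified sketch. Real GANs pit two neural networks against each other and train them by gradient descent; here both "models" are toy stand-ins (a forger that nudges a single number toward the training data, and a critic that measures distance from it), intended only to show the back-and-forth structure:

```python
import random

# Toy illustration of the adversarial loop behind a GAN.
# Real GANs use neural networks trained by gradient descent;
# both "models" here are deliberately simplistic stand-ins.

random.seed(0)

# "Training data": samples the forger is trying to imitate.
real_samples = [random.gauss(5.0, 1.0) for _ in range(1000)]
real_mean = sum(real_samples) / len(real_samples)

def discriminator(sample, tolerance=0.5):
    """Flags a sample as fake if it sits too far from the real data's mean."""
    return abs(sample - real_mean) > tolerance  # True -> "detected as fake"

def train_generator(steps=200, lr=0.1):
    """The forger repeatedly adjusts its output using the critic's feedback."""
    guess = 0.0
    for _ in range(steps):
        if discriminator(guess):
            # Still detected: move the forgery toward the real distribution.
            guess += lr * (real_mean - guess)
        else:
            break  # the discriminator can no longer tell it apart
    return guess

forgery = train_generator()
print(f"forgery={forgery:.2f}, detected={discriminator(forgery)}")
```

The point of the loop is the stopping condition: training ends precisely when the critic fails, which is why the finished forgery is, by construction, good enough to fool the detector it was trained against.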
Understanding how deepfakes are produced is important to appreciate just how effective they can be. Deepfakes are quite literally forgeries that can fool computers.
The implications of this technology are potentially immense.
Many have even pointed to the potential national security threats posed by deepfake tech. Florida senator Marco Rubio has been one prominent policymaker to sound the alarm. “In the old days,” said Rubio to an audience in Washington last year, “if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today, you just need access to our internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply.”
Simply put, deepfakes can be used to spread highly damaging misinformation at a scale never before seen. Deepfake images and videos can be used to manufacture false scandals, promote the most outrageous conspiracy theories, and destroy reputations. The potential for abuse is limitless.
The threat is only compounded considering the proliferation of deepfake technology. Today, ordinary users can download a variety of apps such as the now popular FakeApp and get started creating their very own deepfakes from their smartphones.
Detecting deepfakes, even with the assistance of technology, is not an easy task. Amateurish deepfakes generated by weak tools can often be spotted by the human eye, and others can be identified by most forensic programs. Machines rely on indicators such as a lack of eye blinking or shadows projected at odd angles. But the GANs that generate deepfakes are getting better all the time, and many fear that existing technology will soon be unable to discern forgeries from the real thing. The concern was serious enough that policymakers tasked DARPA, the R&D wing of the Defense Department, with funding digital anti-fraud programs to detect deepfakes. Millions have already been invested in these efforts. But despite the ongoing work, many are skeptical that any foolproof method of identifying deepfakes is possible. According to David Gunning, the program manager in charge of the DARPA project, deepfake generators could in theory get around any detection method. “If you gave a GAN all the techniques we know to detect it, it could pass all of those techniques,” said Gunning. “We don’t know if there’s a limit. It’s unclear.”
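One of the forensic indicators mentioned above, abnormally infrequent blinking, can be sketched as a simple heuristic. The threshold and the per-frame eye-state flags below are hypothetical illustrations (a real system would get those flags from an upstream eye-state classifier), not a production detector:

```python
# Minimal sketch of a blink-rate heuristic: early deepfakes often
# showed unnaturally low blink rates. Threshold values here are
# hypothetical, chosen only for illustration.

def blink_rate_suspicious(blink_frames, fps, min_blinks_per_min=6.0):
    """Flag a clip whose estimated blink rate falls below a plausible human rate.

    blink_frames: one boolean per video frame (True = eyes closed), assumed
    to come from some upstream eye-state classifier.
    """
    if not blink_frames:
        return False
    # Count open-to-closed transitions as discrete blinks.
    blinks = sum(
        1 for prev, cur in zip(blink_frames, blink_frames[1:])
        if cur and not prev
    )
    minutes = len(blink_frames) / fps / 60.0
    return (blinks / minutes) < min_blinks_per_min

# A 30-second clip at 30 fps containing a single blink: suspiciously few.
frames = [False] * 900
frames[450] = True
print(blink_rate_suspicious(frames, fps=30))  # True -> flagged as suspect
```

The weakness Gunning describes applies directly here: once a heuristic like this is known, a GAN can simply be trained against it, generating blinks at a natural rate and passing the check.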
Combating the threat presented by deepfakes therefore requires a shift in strategy. Deepfake generation is a powerful technology, and our ability to overcome it definitively is doubtful at best. Suppressing the tools used to produce deepfakes is equally impossible, as they have become a ubiquitous part of the digital sphere.
Fighting the technology is clearly not the correct path. Rather, fighting the actors behind deepfakes is.
Deepfakes become a serious problem only when used by extremists and rogue governments for propaganda purposes, such as staging false-flag operations and driving militant recruitment. Those actors must be the focus of any response.
Industry leaders have already pointed to a network-based solution to the problem of deepfake abuse. In this model, methods for spotting the likely use of deepfakes would be developed and honed. Actors deploying deceptive media could then be identified as such and their messages delegitimized, perhaps even through friendly counter-deepfake campaigns.
The momentum is indeed shifting. Joel Zamel, a private intelligence and social media specialist who combats extremists online, is wary of the power that terrorists wield on the internet. His firms specialize in information warfare, crowdsourced intelligence analysis, and expert wargaming, with a special focus on countering extremist and disinformation campaigns, many of which deploy deepfakes to radicalize and incite populations. Zamel is one of the few specialists in the civilian sector to have developed proven methods against this type of deepfake extremism, and as deepfake technology evolves, demand for his unique brand of services will soar. That said, governments appear to have yet to fully grasp the importance of countering deepfakes and extremism online. In a recent interview, Zamel noted that “governments should make broader and better use of these capabilities. They may be the key to defeating terrorists on the ideological level, as well as disincentivizing mass shootings and lone wolf attacks.”
This approach is in line with current trends within intelligence and law enforcement agencies when it comes to combating extremism and other malicious actors online. In recent years the strategy has clearly shifted from a ‘shut it down’ mentality toward one of monitoring. Instead of removing deepfake media upon discovery (a move that only promotes the narrative being pushed), authorities and online platforms must call out the misuse for what it is, thereby undermining the actors behind it. Ultimately, this will address the most pressing concern in the online arena: countering the toxic narratives that breed extremism and militant violence.
The cat is clearly out of the bag. Deepfakes are, and will continue to be, an increasingly large part of our digital landscape. Only by addressing them with this understanding can we hope to turn a powerful technology from a danger into an asset.