This paper traces parallel developments in technology and deception, considers how they may evolve in future, and discusses the policy implications for governments, corporations, and private individuals.

Outlook 

A number of key trends can be expected in the immediate and near future. The development and deployment of influence campaigns leveraging this technology will accelerate still further, and machine-learning algorithms that enhance profiles and interactions to build networks for commercial or malign purposes will go mainstream within a very short space of time. Meanwhile, attitudes to deepfakes will remain confused and conflicted. Dramatic predictions of the consequences of their abuse for political purposes will continue, some justified and some overwrought. But in parallel, normalisation will also continue, driven by the increasing and widely accepted prevalence of virtual individuals, especially in marketing. One disturbing side-effect, with unpredictable social consequences, will be the continuing erosion of confidence that any online interaction is in fact with a real person.

The race between creation and detection of deepfakes will continue, with each side enjoying temporary advantage. Apparent success in developing detection techniques will on occasion provide false confidence that the problem has been solved, based on faith in the apotropaic powers of neural networks to detect and counter the phenomenon they themselves begat. But the realisation will develop that deepfakes are like terrorism: impossible to eradicate or resolve altogether, so the answer is to find ways of living with them as a perennial problem and mitigating the most damaging likely outcomes. In another parallel with combating terrorism, in creating and countering deepfakes the moral asymmetry will continue to favour the malign actor, with none of the constraints of the rule of law to hamper their agility and ingenuity in devising new means to exploit the technology to do harm. As such, deepfakes will in time form a key element of how cyber-enabled information strategies are used in future war and hybrid conflict.

Artificial systems will begin to assist or counter each other autonomously in real time. Voice-synthesis and automatic speaker verification systems have already been pitted against each other in simulation, as have sophisticated chatbots, in each case with disturbing results. The continuing proliferation of machine-learning systems for generating content will require decisions about keeping a human in the loop similar to those already under discussion for AI-driven or autonomous combat and weapons systems. Meanwhile, apps and platforms will continue to present themselves as neutral, but an increasing number will be developed and used as tools of national competition, subversion, and espionage.
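Such generator-versus-verifier pairings can be reproduced in miniature. The sketch below assumes the open-source SpeechBrain toolkit and its pretrained ECAPA-TDNN speaker-verification model; the file names are hypothetical placeholders. It illustrates the kind of automated adversarial testing described above, not any specific system used in the simulations cited.

```python
# Sketch: testing a speaker-verification model against a cloned voice.
# Assumes the SpeechBrain toolkit (pip install speechbrain) and its
# pretrained ECAPA-TDNN model; "enrolled.wav" and "cloned.wav" are
# hypothetical local files (a genuine enrolment sample and a
# synthesised utterance respectively).
from speechbrain.pretrained import SpeakerRecognition

verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_ecapa",
)

# Score the synthesised utterance against the genuine enrolment sample.
score, decision = verifier.verify_files("enrolled.wav", "cloned.wav")
print(f"similarity: {score.item():.3f}, accepted as same speaker: {bool(decision)}")

# If the cloned sample is accepted at the verifier's default threshold,
# the attacker's training data was sufficient to defeat this check.
```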

Mainstream media awareness and popularisation of the term ‘deepfake’ will lead to definition creep, as precise and strictly bounded criteria for what can be termed a deepfake give way, in non-specialist discussion, to confusion with simply edited audio, video, or still images. But ‘deepfake text’, in the form of algorithmically generated messages flooding recipients to give a false impression of political consensus, will present a further evolution in the manipulation of public discourse, one that will be conflated with other machine-learning enhancements to malign influence campaigns. Of all forms of machine-enhanced deceptive content, text-based output is the first that will include interactions adjusted for the emotional state of the target, observing and analysing human responses and seeking the most effective method of influence through what is known in human-computer interface research as ‘emotional modelling’.
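One partial countermeasure to such message floods is already within reach: algorithmically generated campaigns often betray themselves through clusters of near-duplicate paraphrases. The following is a minimal sketch of that heuristic using word-set (Jaccard) similarity; the 0.7 threshold and the sample messages are illustrative assumptions, not a production detector.

```python
# Minimal sketch: flagging a possible 'deepfake text' flood by finding
# near-duplicate messages. The Jaccard heuristic and 0.7 threshold are
# illustrative assumptions only.
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Similarity of two messages as overlap of their word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def flag_flood(messages: list[str], threshold: float = 0.7) -> list[tuple[int, int]]:
    """Return index pairs of suspiciously similar messages."""
    return [
        (i, j)
        for (i, a), (j, b) in combinations(enumerate(messages), 2)
        if jaccard(a, b) >= threshold
    ]

comments = [
    "I fully support this policy, it protects our community.",
    "I fully support this policy because it protects our community.",
    "Completely unrelated remark about the weather.",
]
print(flag_flood(comments))  # [(0, 1)] — the paraphrased pair is flagged
```

A real detector would need to account for deliberate lexical variation, which is precisely what machine-generated paraphrase makes cheap; the arms-race dynamic described above applies here too.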

Even in the absence of dramatic involvement of deepfakes in causing political change or upheaval, the long-term social implications may be profound. The more pervasive the present hype over deepfakes, the easier it becomes to claim that any legitimate information might in fact be doctored, with accusation and counter-accusation of fraud traded between disinformation spreaders and their debunkers. This problem is of course not limited to deepfakes themselves, as disinformation researcher Renee DiResta notes: “whether it’s AI, peculiar Amazon manipulation hacks, or fake political activism—these technological underpinnings [lead] to the increasing erosion of trust”. This points to a danger that user education in the critical consumption of information may have an unintended consequence. If not managed carefully, emphasis on warnings that online content may be deceptive could contribute not only to this erosion of trust in genuine sources but also to the very problem it seeks to address: the loss of belief, in society at large, in any form of objective truth.

Policy recommendations 

The challenge of information warfare is not a static situation but a developing process. Adversary approaches evolve, adapt, and build further on identified successes. It follows that nations and organisations that prepare only to counter currently visible threats and capabilities will find themselves out of date and taken by surprise by what happens next. Defences must instead be agile, alert to trends, and forward-thinking in how to parry potential future moves.

The deepfakes arms race will be a contest of agility and innovation. While it progresses, there are practical mitigation steps that can be taken. Pending the introduction of adequate defences against voice mimicry, individuals and corporations can review the extent of publicly available audio recordings to assess whether a dataset is sufficient to generate fake voice interaction or authentication (a minimal sketch of such an audit follows below). Governments, NGOs, and think-tanks can adopt corporate attitudes to brand protection and compliance in order to increase awareness of who is purporting to represent them. Legal authorities can consider further whether and when deception carried out by means of deepfakes is, or should be, a criminal offence and, if so, which one. Social media platforms should continue to be challenged to address some of the most pernicious consequences of their laissez-faire attitude to hostile activity delivered across their networks.
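As one concrete example of the audio review suggested above, the following sketch, using only the Python standard library, totals the duration of voice recordings gathered into a local folder. The folder name, the WAV format, and the five-minute risk threshold are illustrative assumptions; published voice-cloning results suggest that even smaller datasets may suffice, so any such threshold should be treated as conservative.

```python
# Sketch: auditing the volume of publicly available voice recordings
# for a given individual. Assumes the clips have been gathered locally
# as WAV files; the 5-minute threshold is an illustrative assumption,
# not an established standard.
import wave
from pathlib import Path

def total_speech_minutes(directory: str) -> float:
    """Sum the duration of all WAV files under `directory`, in minutes."""
    seconds = 0.0
    for path in Path(directory).rglob("*.wav"):
        with wave.open(str(path), "rb") as clip:
            seconds += clip.getnframes() / clip.getframerate()
    return seconds / 60

minutes = total_speech_minutes("collected_public_audio")
if minutes >= 5:  # assumed risk threshold
    print(f"{minutes:.1f} min of audio found: treat voice cloning as feasible")
else:
    print(f"{minutes:.1f} min of audio found: exposure currently limited")
```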

But the most powerful defence against the possible pernicious influence of deepfakes remains the same as against malign influence campaigns overall: awareness, and an appropriately developed and well-informed threat perception. Individuals up and down the chain of command of any organisation should be briefed on the potential impact of a deepfake-enhanced attack and, as an adjunct to cyber security awareness campaigns, education for the general public should include accessible explanations of the nature and implications of deepfake technology. Media organisations, especially national ones, should follow the example of Yle in Finland and produce their own demonstration deepfake videos, released under controlled circumstances and illustrating their potential to deceive, in order to educate their audiences. In particular, individuals should be reminded of the basic principle that any personal image or information posted publicly online can become hostage to abuse for nefarious purposes. Katie Jones is a harbinger: it is in everyone’s interest to ensure that no-one is taken by surprise by her inevitable multitude of successors.