Early last year, a series of videos surfaced online that appeared to feature acclaimed Hollywood actress Margot Robbie engaging in activities ranging from handling her husband’s belongings and dancing with a bottle of wine to posing for photos. The viewer comments, which consisted mostly of compliments, made it clear that people were entirely unaware that the person starring in the videos was actually a deepfake. While eagle-eyed viewers might notice certain peculiarities, such as the actress’s eyes almost always facing forward and seldom blinking, most viewers would be oblivious to the falsity of the media. Deepfakes are synthetic media that usually involve the head or body of a person being stitched either onto the body of another person or onto a different scenic background.
Deepfake technology uses deep learning artificial intelligence to analyze real images or videos of the target person and subsequently produce a warped reality by generating content that never actually happened. While deepfakes have often been used for entertainment or artistic purposes, such as creating realistic special effects or impersonations in films, the potential harm caused by such manipulated media has raised concerns about privacy, security, and the authenticity of information in the digital age. For instance, deepfakes have been deployed for revenge porn, spreading fake news, defaming individuals, and spreading propaganda.
The potential impact of deepfakes on insurers
‘This X Does Not Exist’ is a website that has surged in popularity over the past few years. The website uses Generative Adversarial Networks (GANs) to generate realistic versions of almost anything and everything. From people and animals to rental houses, emotions and even resumes, the site analyzes thousands of pictures of real examples and deciphers common features and angles, thereby creating an artificial entity and a warped reality. While such cases are yet to come to light on a regular basis, it is by no means a stretch for the same technology used to produce synthetic media to be turned to generating fake home damage or accident imagery. Such imagery could well be used to make fraudulent claims against insurance companies.
One of the prerequisites of an insurance contract is that there must be a certain degree of loss caused to the insured, and the claim payable is proportionate to the extent of that loss. In the case of a deepfake, however, the accident is a mere mirage, with no actual loss caused.
This is primarily why deepfake technology is a looming threat to the insurance industry. It could be employed not only to file fraudulent claims, but also to generate non-existent assets, alter the existing condition of assets, and create fraudulent inspection reports. Consider, for instance, the destruction caused by a tornado or hurricane, an insured peril under an insurance policy. Deepfake technology could be used to generate fake imagery of incidental losses, involving damage to objects that may or may not even exist. Further, a deepfake is extremely hard to detect, even for an expert. It could therefore be concluded that, with the advent of such technology, insurance companies have never been more vulnerable to potential fraud.
Cybercrime case involving deepfake voice: A significant reminder to the insurance industry
In 2019, as per an article published in the Wall Street Journal, deepfake voice was employed in a first-of-its-kind case to carry out a fraud worth $243,000. According to the article, the incident involved the CEO of a UK-based energy firm, who believed he was conversing with the chief executive of the firm’s German parent company. The CEO was instructed to transfer funds to a Hungarian supplier. Further, according to the company’s insurer, the CEO was told to transfer the funds on an urgent basis. It later came to light that the call had actually been made by a fraudster, who used deepfake technology to mimic the voice of the German chief executive. According to the CEO of the UK-based energy firm, the caller’s voice carried a characteristic melody and German accent, on account of which he failed to detect the fraud. While phone scams have been prevalent for a long time, the victim is not usually an accomplished CEO. Further, Euler Hermes, the insurance company of the victim that covered the entirety of the loss, had never before dealt with claims arising out of losses caused by AI-induced acts.
Although this incident did not directly involve a deepfake being used to claim from an insurance company, it starkly highlighted the vulnerability of the insurance industry to such technology and served as an invaluable reminder for companies to deploy appropriate technology to detect such scams.
Shallowfakes: A lesser-known but emerging phenomenon
While deepfakes are relatively well known today, a lesser-known phenomenon, going by the name of ‘shallowfake’, has emerged in recent years. The primary difference between the two is that while a deepfake involves algorithmic systems and machine learning technology, a shallowfake is produced using basic photo editing software. For insurance companies, this reliance on simple editing tools cuts both ways. While the term ‘shallow’ clearly indicates a less threatening nature than its deepfake counterpart, the absence of deep learning artificial intelligence allows a shallowfake to be deployed more easily and quickly. It is therefore likely that the insurance industry will be subjected to shallowfakes on a more regular basis.
Such basic editing software may be deployed to produce false identity or address proofs, or to furnish additional evidence in favor of the subject claim or transaction, such as expert reports, contracts, agreements or invoices for services. While there have been several moves towards the prevention of shallowfake fraud, the world has witnessed a rapid growth in touchless automation, particularly on account of the Covid-19 pandemic. While touchless automation has its merits, it has increased the significance of customer-supplied photos, which are sent to insurers to settle claims, thereby throwing the doors wide open to shallowfake fraud.
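Because shallowfakes are produced with ordinary photo editors, those editors often leave traces of themselves in an image’s embedded metadata. As a purely illustrative sketch (the signature list and sample bytes below are assumptions, not a real insurer’s screening tool), a crude first-pass filter could scan a customer-supplied file’s raw bytes for the names of known editing tools:

```python
# A naive, illustrative screen for shallowfake red flags: many photo editors
# record their name in an image's embedded metadata (e.g. the EXIF "Software"
# tag or an XMP block). Scanning the raw bytes for known editor signatures is
# a crude first pass, not a substitute for forensic analysis, and the list of
# signatures here is a hypothetical example.

EDITOR_SIGNATURES = [b"Adobe Photoshop", b"GIMP", b"Pixlr", b"Snapseed"]

def flag_editing_software(image_bytes: bytes) -> list[str]:
    """Return the names of any known editing tools found in the file's bytes."""
    return [sig.decode() for sig in EDITOR_SIGNATURES if sig in image_bytes]

# Example: a fabricated byte blob standing in for a customer-supplied JPEG
# whose metadata records the editor that last saved it.
sample = b"\xff\xd8\xff\xe1 ... xmp:CreatorTool=Adobe Photoshop 24.0 ..."
print(flag_editing_software(sample))  # ['Adobe Photoshop']
```

A clean capture from a phone camera would typically return an empty list, while a file that has passed through an editor would be routed for closer review; of course, metadata can itself be stripped or forged, which is precisely why such checks can only ever be one layer of a fraud screen.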
Potential solutions for insurers to counteract deepfake technology
An insurance company that receives media captured from an untrusted application usually has a couple of options. It may either ask the insured to recapture the photos or videos, or accept the media and perform an authenticity check. In practice, unless the insured has a record of insurance fraud, it is unlikely that the insurer would carry out an in-depth analysis of the media. Moreover, with deepfake technology witnessing rapid advancements every day, detecting such fraud is becoming harder and more taxing.
Fortunately, not all is doom and gloom. Alongside the rapid growth of touchless automation, significant advances have been made in designing software capable of detecting deepfakes. Researchers are constantly developing powerful AI tools that can assess the authenticity of a particular image or video. Such tools are essentially models trained on vast amounts of data to identify anomalies and patterns that may indicate a deepfake. The techniques range from analyzing the consistency of light and shadows across an image, to detecting inconsistencies in damage patterns, to tracking the blood flow in an individual’s face.
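One of the simplest anomalies of this kind is the one noted in the Margot Robbie videos above: early deepfakes often blink far less than real people, who blink roughly 15–20 times per minute. As a minimal sketch, assuming an upstream computer-vision model has already produced the timestamps of detected blinks (that model is not implemented here, and the threshold is an illustrative assumption), a downstream check could flag clips with implausibly low blink rates:

```python
# Minimal sketch of a blink-rate anomaly check. The blink timestamps are
# assumed to come from an upstream eye-detection model; this code only
# applies a heuristic threshold to the resulting rate.

def blink_rate_per_minute(blink_times: list[float], duration_s: float) -> float:
    """Blinks per minute over the clip's duration (timestamps in seconds)."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    return len(blink_times) * 60.0 / duration_s

def looks_suspicious(blink_times: list[float], duration_s: float,
                     min_rate: float = 8.0) -> bool:
    """Flag clips whose blink rate falls well below a human baseline."""
    return blink_rate_per_minute(blink_times, duration_s) < min_rate

# A 60-second clip with only 2 detected blinks is well below the baseline.
print(looks_suspicious([12.4, 47.9], 60.0))                  # True
print(looks_suspicious([t * 3.5 for t in range(17)], 60.0))  # False (~17/min)
```

In a real detection pipeline, a flag like this would be combined with many other signals (lighting consistency, facial blood-flow estimation, compression artifacts) before a claim is escalated, since any single heuristic is easy for an improving generator to learn around.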
However, deploying such detection mechanisms is no walk in the park. Analyzing photos and videos for deepfake scams takes a significant amount of time and money, given the persistent and rapid advancements in deepfake technology. This may well result in a never-ending cat-and-mouse game, since detection technology would constantly have to keep pace with the improvements being made to the very technology it is meant to catch.
While insurance fraud is by no means a new concept, the technology designed to carry out automated fraud prevention has not been able to keep up with the pace at which touchless automation is advancing, opening the door to new risks and opportunities. While some insurance companies may prefer to compromise on automated fraud prevention in order to direct costs towards providing a more enriching consumer experience, others may opt for a more secure path and direct costs towards developing software capable of detecting deepfake fraud.
Irrespective of whether solutions to counteract synthetic media are embraced, it is an irrefutable fact that deepfake technology is here to stay. The modes through which digital media is compromised are becoming more complex, and so, therefore, is the task of detecting such fraud. It is for this reason that companies are taking steady steps to set up automated fraud detection software.
While it is currently hard to say whether fraudsters will utilize deepfake technology to dupe insurance companies and extract claims, within the next few years it will become abundantly clear whether the insurance companies currently investing in developing and installing automated fraud detection software have invested wisely.
(This post has been authored by Vedant Saxena, a 4th Year law student at RGNUL, Patiala)
CITE AS: Vedant Saxena, ‘Bringing out the Threat of Deepfake Technology to the Insurance Industry’ (The Contemporary Law Forum, 04 May 2023) <tclf.in/2023/05/04/bringing-out-the-threat-of-deepfake-technology-to-the-insurance-industry/> date of access.