Ai Editorial: Detecting deepfakes to combat identity fraud

15th November, 2019

Ai Editorial: Deepfakes, supported by AI techniques, are today considered a growing problem. It is vital to build AI systems that can automate deepfake detection so that risks such as identity fraud can be tackled, writes Ai’s Ritesh Gupta


Artificial intelligence (AI)-based identity fraud is emerging as a serious issue. Relying on recognition of a person’s voice and face to validate their identity is coming under scrutiny with the rise of synthetic media and deepfakes. The repercussions, whether security-related risks, user privacy concerns or fraudulent transactions, are now being probed.

Technology to manipulate images, videos and audio files is progressing faster than our ability to tell what is real from what has been faked. According to the findings of a study released last month, the number of deepfake videos online almost doubled over the last seven months, reaching 14,678.

The level of sophistication with which fraudsters are moving ahead is exemplified by the recent case in which an executive was duped into transferring $243,000 to a bank account, and by reports of top AI researchers in the U.S. struggling to cope with computer-generated fake videos that could undermine candidates and mislead voters during the 2020 presidential campaign. Such cases, a faked phone call here, a doctored video there, show how deepfake techniques are encroaching on people’s lives in harmful ways.

Deepfakes are powered by deep learning. The algorithms behind them are fed large amounts of data, typically footage of the person being imitated, and then use what they have learned to manipulate audio and video so that it appears as though someone did or said something they didn’t. This poses a real challenge to validating the legitimacy of information presented online.
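To make the mechanics concrete, a common face-swap approach, popularized by open-source deepfake tools, trains one shared encoder with a separate decoder per identity; because both decoders reconstruct faces from the same latent space, encoding person A’s face and decoding it with person B’s decoder produces the swap. The PyTorch sketch below is a minimal illustration of that idea only: the layer sizes, training step and random stand-in batches are assumptions for demonstration, not the implementation of any particular tool.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 RGB face into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face from the shared latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# One training step (sketch): each decoder learns to reconstruct its own person.
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real training batches
faces_b = torch.rand(8, 3, 64, 64)
loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
       loss_fn(decoder_b(encoder(faces_b)), faces_b)
opt.zero_grad()
loss.backward()
opt.step()

# The "swap": encode person A's face, decode it with person B's decoder.
fake_b = decoder_b(encoder(faces_a))

This split is also why deepfakes need “large amounts of data”: each decoder must see enough footage of its target to reconstruct that face convincingly.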

The case in China

Zao, a free deepfake face-swapping app, not only exemplified how quickly deepfakes have gone mainstream but also triggered a privacy backlash amid concerns about identity theft. The Chinese app lets users upload their photographs, and its AI engine then swaps their faces with those of celebrities featured in video clips. Zao subsequently amended its policies, stating that the app would not store users’ biometric information and that their data would not be transferred without consent.

This privacy storm was mainly confined to China, but the threat was acknowledged everywhere, since the app showed how readily the technology is now available to smartphone users. In no time, questions were raised about the possibility of payment-related fraud, too. With biometric technologies such as Alipay’s ‘Smile to Pay’ being increasingly adopted as a form of payment across China, the concerns were valid. Alipay currently serves over 1 billion users. Ant Financial Services Group, which operates Alipay, stated that its facial recognition capabilities were safe and that its facial payment system would not be breached. It also emphasized that its team has implemented rigorous, best-in-class privacy, security and risk control processes.

What is coming under inspection is the efficacy of biometric security measures such as voice and facial recognition. Can they be compromised by deepfakes that can almost perfectly imitate these features of a person?

Combatting threats

Initiatives are in the pipeline, focusing on automated deepfake detection.

Identity verification specialist Jumio highlighted that it is “vitally important to embed 3D liveness detection into identity verification and authentication processes”. The company is working on plans to combat advanced spoofing attacks, including deepfakes, and its offering was recently introduced as a beta.
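Jumio’s own approach is proprietary, but a simple example shows what a basic liveness cue looks like in practice: ask the user to blink and track the eye aspect ratio (EAR), a standard landmark-based measure that collapses toward zero when the eye closes. The Python sketch below assumes eye landmark coordinates have already been extracted by a face-landmark detector, and the threshold value is an illustrative assumption; production 3D liveness systems are considerably more sophisticated.

import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from the 6 eye landmarks of the standard 68-point face layout.

    eye: array of shape (6, 2) ordered corner, two top points, corner,
    two bottom points. EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    """
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

EAR_BLINK_THRESHOLD = 0.2  # illustrative value; tuned per camera and setup in practice

def blinked(ear_sequence) -> bool:
    """Crude liveness cue: did the EAR dip below the threshold in any frame?"""
    return min(ear_sequence) < EAR_BLINK_THRESHOLD

# Example: per-frame EAR values during a short challenge ("please blink now").
print(blinked([0.31, 0.30, 0.12, 0.29, 0.32]))  # True -> a blink was detected

A replayed photograph, and many naive deepfake streams, will fail such a challenge-response test because they cannot blink on request.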

Facebook was recently in the news for working on a ‘de-identification’ technology that morphs a person’s face so that they remain unrecognisable to facial recognition systems.

Amazon Web Services (AWS), Facebook, Microsoft and other organizations have recently committed to initiatives that encourage work on technology to better detect when artificial intelligence has been used to alter a video in order to mislead the viewer. AWS has indicated that building deepfake detectors will require novel algorithms that can process a vast library of data (more than 4 petabytes). Established organizations have chosen to collaborate because it is widely acknowledged that the community needs data that is freely available to use. For instance, Facebook is commissioning a realistic data set, built with paid actors who have given the required consent, to contribute to a detection challenge; no Facebook user data will be used in this data set, according to the company. Concrete results, especially better detection tools, are still awaited, as the likes of Facebook and Amazon admit that identifying manipulated content and deepfakes is a technically demanding and rapidly evolving challenge.
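None of these organizations has published its detector, but the usual starting point for such systems is a supervised classifier: face crops labelled real or fake, drawn from a data set like the one described above, are fed to a convolutional network that outputs a manipulation probability per frame. The PyTorch sketch below is a hypothetical minimal baseline; the architecture, input size and random stand-in batch are illustrative assumptions, not any organization’s actual detector.

import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    """Minimal per-frame deepfake classifier: CNN -> probability of 'fake'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single logit: fake vs. real

    def forward(self, frames):
        h = self.features(frames).flatten(1)
        return self.head(h).squeeze(1)

model = FrameDetector()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch: 16 cropped face frames; label 1 = manipulated, 0 = genuine.
frames = torch.rand(16, 3, 128, 128)
labels = torch.randint(0, 2, (16,)).float()

# One supervised training step on labelled real/fake frames.
logits = model(frames)
loss = loss_fn(logits, labels)
opt.zero_grad()
loss.backward()
opt.step()

# A crude video-level score: mean fake-probability across the video's frames.
fake_prob = torch.sigmoid(model(frames)).mean()
print(float(fake_prob))

Real detectors typically add temporal modelling across frames and artifact-specific features on top of this skeleton, which is part of why the task remains technically demanding.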

Deepfakes aren’t fading away, and their consequences are being felt on a global scale.


Hear from fraud prevention and cybersecurity experts at Ai’s next ATPS –

http://www.airlineinformation.org/upcoming-events2/370-2020-conference-dates.html