A Blog by Jonathan Low


Jun 29, 2022

FBI Investigating Use of Deepfakes By People Applying For Remote Jobs

If it's just someone without the requisite skills using deepfakes to try to get a better-paying job, it's fraud. 

But if it's a Chinese, Russian or North Korean hacker trying to get access to proprietary technology or intellectual property, it may be a national security threat. JL

Weilun Soon reports in Business Insider:

The FBI said it has received an uptick in complaints about people superimposing videos, images, or audio recordings of another person onto themselves during live job interviews. The complaints were tied to remote tech roles that would have granted successful candidates access to sensitive data, including "customer PII (Personally Identifiable Information), financial data, corporate IT databases and/or proprietary information." 86% of the time, anti-deepfake technologies accepted deepfake videos as real.

More and more people are using deepfake technology to pose as someone else in interviews for remote jobs, the FBI said on Tuesday.

In its public announcement, the FBI said it has received an uptick in complaints about people superimposing videos, images, or audio recordings of another person onto themselves during live job interviews. The complaints were tied to remote tech roles that would have granted successful candidates access to sensitive data, including "customer PII (Personally Identifiable Information), financial data, corporate IT databases and/or proprietary information," the agency said.

Deepfake videos can be used for entertainment purposes, but they can also be extremely harmful. In March, Meta said it removed a deepfake video that claimed to show Ukrainian President Volodymyr Zelenskyy ordering Ukrainian forces to lay down their arms amid Russia's invasion.

Equally concerning is the harm that private individuals could face from being targeted by deepfakes, as in the cases highlighted by the FBI on Tuesday. "The use of the technology to harass or harm private individuals who do not command public attention and cannot command resources necessary to refute falsehoods should be concerning," the Department of Homeland Security warned in a 2019 report about deepfake technology.


Fraudulent applicants for tech jobs are nothing new. In a November 2020 LinkedIn post, one recruiter wrote that some candidates hire outside help to assist them in real time during interviews, a trend that seems to have gotten worse during the pandemic. In May, recruiters found that North Korean scammers were posing as American job applicants at crypto and Web3 startups.

What's new in the FBI's Tuesday announcement is the use of AI-powered deepfake technology to help people get hired. The FBI did not say how many incidents it has recorded.

Anti-deepfake technologies are far from perfect

In 2020, the number of known online deepfake videos reached 145,227, nine times more than a year earlier, according to a 2020 report by Sentinel, an Estonian threat-intelligence agency.

Technologies and processes that weed out deepfake videos are far from foolproof. A report from Sensity, a threat-intelligence company based in Amsterdam, found that 86% of the time, anti-deepfake technologies accepted deepfake videos as real.


However, there are some telltale signs of deepfakes, including abnormal blinking, an unnaturally soft focus around skin or hair, and unusual lighting.
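For a concrete sense of how one of these signs can be checked automatically, below is a minimal sketch of a blink-rate heuristic built on MediaPipe Face Mesh and OpenCV. The landmark indices, thresholds, and frame counts are illustrative assumptions, not a vetted detector; unnaturally infrequent blinking is a flag, not proof, of a deepfake.

```python
# Minimal blink-counting sketch using the eye aspect ratio (EAR).
# Assumptions: MediaPipe's 468-point face mesh, commonly used right-eye
# landmark indices, and rough thresholds chosen for illustration only.
import cv2
import mediapipe as mp

EYE = [33, 160, 158, 133, 153, 144]  # assumed EAR landmarks on the mesh
EAR_THRESHOLD = 0.20                 # assumed: eye counts as "closed" below this
CLOSED_FRAMES_MIN = 2                # assumed: frames the eye must stay closed

def eye_aspect_ratio(lm):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); low values mean a closed eye."""
    def dist(a, b):
        return ((lm[a].x - lm[b].x) ** 2 + (lm[a].y - lm[b].y) ** 2) ** 0.5
    p1, p2, p3, p4, p5, p6 = EYE
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(video_path):
    """Count blinks across a video; an abnormally low rate is a deepfake flag."""
    mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    blinks, closed = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        ear = eye_aspect_ratio(result.multi_face_landmarks[0].landmark)
        if ear < EAR_THRESHOLD:
            closed += 1
        else:
            if closed >= CLOSED_FRAMES_MIN:
                blinks += 1
            closed = 0
    cap.release()
    return blinks
```

A person on camera typically blinks somewhere around 15 to 20 times a minute, so a count far below that over several minutes of footage would warrant a closer look.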

In its announcement, the FBI also offered a tip for spotting voice deepfake technology. "In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually," the agency wrote.
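As a rough illustration of what lips and audio "not completely coordinating" could mean computationally, the sketch below correlates a per-frame mouth-openness signal with the audio's loudness envelope; a near-zero or negative correlation suggests the visible mouth is not driving the voice. The landmark indices, the audio being exported as a separate file (e.g. with ffmpeg), and the use of a simple correlation are all assumptions for illustration; production lip-sync detectors are considerably more sophisticated.

```python
# Sketch: correlate mouth openness with the audio loudness envelope.
# Assumptions: MediaPipe inner-lip landmarks 13 (upper) and 14 (lower),
# and an audio track exported separately from the video.
import cv2
import numpy as np
import librosa
import mediapipe as mp

def lip_audio_correlation(video_path, audio_path):
    """Return the Pearson correlation between mouth openness and audio RMS."""
    mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    openness = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            openness.append(abs(lm[13].y - lm[14].y))
        else:
            openness.append(0.0)  # keep alignment with the audio frames
    cap.release()

    # Audio loudness per video frame: RMS energy with one hop per frame.
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    hop = int(sr / fps)
    rms = librosa.feature.rms(y=y, frame_length=hop, hop_length=hop)[0]

    n = min(len(openness), len(rms))
    return float(np.corrcoef(openness[:n], rms[:n])[0, 1])
```

Padding with 0.0 on frames where no face is found keeps the two signals the same length, at the cost of diluting the correlation when the face drops out often.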

The FBI said people or companies who have identified deepfake attempts should report the cases to its complaint website.
