U.S. tech companies face a wave of job applications from "AI impostors"
Recently, CNBC reported a surge of “fake job seekers” who use AI deepfake technology to impersonate other people.
According to the report, Pindrop Security, a U.S. voice authentication startup, recently posted a remote engineering position. A Russian programmer named “Ivan” stood out with a flawless resume and landed an interview. During the video interview, however, the recruiters noticed a subtle mismatch between his facial expressions and his speech. The truth soon emerged: he was a job seeker impersonating someone else with AI technology.
Pindrop CEO and co-founder Vijay Balasubramaniyan said the candidate, whom the company later dubbed “Ivan X,” used generative AI and deepfake tools, including a synthesized face and voice, in an attempt to deceive his way into the company’s systems. He may even have been located near a military facility on the border between Russia and North Korea, rather than in western Ukraine as he claimed.
The proliferation of fake job seekers
Companies have long faced threats from hackers; now, new cybersecurity vulnerabilities are emerging in the human resources department.
Job seekers are using AI tools to forge identification documents, fabricate resumes, and even train AI models to answer interview questions. Their motives vary: collecting salaries, stealing customer data, planting ransomware, or even taking part in state-sponsored cyber espionage.
Research and consulting firm Gartner predicts that by 2028, approximately one quarter of job applicants worldwide will be fraudulent.
The cybersecurity and cryptocurrency industries, where many positions are remote, have become a favorite “hunting ground” for these criminals. Lili Infante, founder of CAT Labs, said that nearly every job posting draws “about 100 applications from North Korean spies.” “Their resumes look flawless, hitting every keyword we require.”
BrightHire provides video interview analysis services to more than 300 companies in the finance, healthcare, and technology industries. Ben Sesser, its chief executive, said the number of fake job applicants has risen sharply this year. “Recruiting is a human-driven process, which makes it the most vulnerable link in the system.”
These impostors typically use stolen American identities, mask their locations with VPNs, and pass multiple rounds of interviews with the help of AI-generated avatars and fabricated backgrounds. In some cases, they even go on to perform like “star employees.”
Last October, the cybersecurity company KnowBe4 inadvertently hired an “employee” with a forged identity. The individual used an AI-generated avatar and stolen identity information to pass background checks and four video interviews, until suspicious activity on his account raised the company’s alarm.
In May last year, the U.S. Department of Justice also alleged that more than 300 companies had inadvertently hired fake IT workers with ties to North Korea, funneling millions of dollars in wages abroad. The FBI even issued wanted notices for the “employees” involved.
According to reports, the perpetrators are no longer limited to North Korea; criminal groups may also operate out of Russia, Malaysia, and South Korea.
Technology War and Trust Crisis
“Generative AI has blurred the line between human and machine,” Balasubramaniyan said. “We are witnessing a new threat: job seekers are not only faking their identities and experience; even their presence in the interview itself is fake.”
Faced with rapidly evolving deepfake technology, companies are turning to technical countermeasures. Pindrop, for example, deployed an in-house video authentication system that exposed “Ivan X’s” disguise.
Identity verification has also become an emerging market: companies such as Jumio, iDenfy, and Socure offer employers verification services based on biometric and behavioral analysis to help flag fraudulent candidates.
Despite legal action and media coverage, most corporate HR teams still do not treat the issue as a serious risk. Sesser noted: “They focus on talent strategy but invest little in security defenses. Many companies may have already hired fake employees without realizing it.”
Balasubramaniyan also warned: “We can no longer rely on our eyes and ears alone to tell what is real. Without technical assistance, humans might as well be flipping a coin.”
Source: Eastmoney
Author: International Finance News