When a recruiter at Atlanta-based voice authentication startup Pindrop Security interviewed a seemingly qualified engineering candidate named Ivan over video, something felt off. Ivan’s facial expressions didn’t quite match his speech. And the company’s internal systems traced his IP address not to Ukraine, as he claimed, but thousands of miles away, possibly to a Russian military facility near the North Korean border.
Ivan wasn’t just lying about his location. He wasn’t even real.
According to Pindrop CEO Vijay Balasubramaniyan, the man dubbed “Ivan X” by the company was a scammer using deepfake technology to impersonate a job seeker. His face and voice were AI-generated. The goal wasn’t simply to land a job. It was to infiltrate the company.
“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Balasubramaniyan said. “What we’re seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.”
The Infiltration Economy
This isn’t a one-off. Research firm Gartner warns that by 2028, 1 in 4 job candidates globally could be entirely fake, enabled by generative AI, deepfake tools, and weak digital hiring processes. And these aren’t benign tricksters trying to cheat a résumé screen. They’re strategic actors with bigger goals.
We spoke exclusively with Michael Matias, CEO of Clarity, an AI cybersecurity firm tackling deepfakes and social engineering attacks, to understand what’s enabling this shift. He was blunt:
“The primary motive is often infiltration. We’re seeing attackers pose as job applicants not just to land a paycheck, but to gain access — to systems, data, and internal networks. It’s the evolution of phishing: instead of tricking someone into clicking a link, the attacker becomes the employee.”
It’s already happened. In May, the U.S. Department of Justice revealed that over 300 American companies, including a defense manufacturer, a major television network, and a Fortune 500 automaker, had unknowingly hired impostors with ties to North Korea for remote IT jobs. Using stolen identities and VPN cloaking, these fake employees funneled millions in wages back to Pyongyang’s weapons programs.
KnowBe4, a major cybersecurity firm, admitted in October that it, too, had been duped. The company hired a North Korean software engineer who used AI-altered stock photos and a stolen American identity to pass background checks and four video interviews. He was only caught after internal systems flagged suspicious network behavior.
Remote Hiring
The surge in AI-assisted fraud is exposing a fundamental weakness in how companies, particularly remote-first ones, hire. Most businesses rely on résumés, LinkedIn profiles, and brief video calls. But as Matias noted in our interview, “Companies rely on static checks like resumes and LinkedIn profiles, but those can be faked. What’s needed is lightweight, continuous verification: brief live interactions, biometric cues, and smart integrity signals — that don’t slow things down but make fraud much harder to scale.”
In other words, even the most rigorous interview processes can be gamed. That’s what happened to Vidoc Security, a tech startup that nearly hired not one, but two AI impostors for backend engineering roles. In the first case, the candidate passed the technical screen with flying colors. But during a final interview, the team realized he spoke no Polish despite claiming to be from Poland, and his on-screen appearance seemed digitally off. A second, similar incident followed, this time captured on video.
A New Kind of Cybersecurity Threat
The line between talent acquisition and security breach is vanishing. That’s why Matias believes cybersecurity teams should have a seat at the hiring table.
“It’s not overstepping — it’s overdue,” he told us. “Hiring is now a risk surface. If someone can infiltrate your company through a fake identity, that’s a security breach, not just an HR miss. Cyber and HR must collaborate. Think of it like finance and fraud teams working together: different lenses, same mission — protect the company.”
At CAT Labs, a Florida-based startup that sits at the intersection of crypto and cybersecurity, founder Lili Infante has stopped being surprised by fake applicants.
“Every time we list a job posting, we get 100 North Korean spies applying to it,” she said. Their résumés, she added, are “amazing”: packed with the right keywords and achievements, with profiles polished to perfection. Her company now leans on ID verification vendors like iDenfy, Jumio, and Socure to help screen out impostors.
The problem has gone global. According to cybersecurity consultant Roger Grimes, fraud rings now span Russia, China, Malaysia, and South Korea. Sometimes, ironically, the fake hires perform well enough to cause second thoughts about firing them. “I’ve actually had a few people tell me they were sorry they had to let them go,” Grimes said.
The ‘Ivan X’ Moment
For Pindrop, the Ivan X episode may have cemented the company’s next pivot. Initially founded to detect fraud in voice interactions, the firm, backed by Andreessen Horowitz and Citi Ventures, is now investing in video authentication tools. Balasubramaniyan is blunt about the stakes: “We are no longer able to trust our eyes and ears,” he said. “Without technology, you’re worse off than a monkey with a random coin toss.”
Ben Sesser, CEO of hiring technology company BrightHire, believes that despite a few high-profile cases, many companies still don’t realize they’re already being targeted. “Folks think they’re not experiencing it, but I think it’s probably more likely that they’re just not realizing that it’s going on,” he said.
So what now? Gartner’s forecast that one-quarter of global job applicants could be fake within three years forces a hard rethink of how businesses build trust in hiring. Matias suggests looking to zero-trust security principles as a blueprint.
“We need to apply the same mindset from zero-trust security to hiring: don’t assume authenticity — validate it continuously. In a world of global, remote-first teams, trust can’t be based on a résumé or a Zoom call. It needs to be earned and re-earned through signals — like behavioral consistency, live identity verification, and integrity checks — just like we do for devices and networks. Identity is now a critical control point.”
The bigger risk now is complacency. As Matias puts it, “Deepfake hiring isn’t a futuristic threat — it’s already here. If you’re hiring remotely, especially at scale, you’ve likely already encountered a fake applicant — you just didn’t catch it. Complacency is the biggest vulnerability. The companies saying ‘this can’t happen to us’ are often the ones it’s already happening to.”