Understanding Candidate Fraud: Beyond the Hype

In fact, one recent survey found that 71% of HR professionals report encountering fake or misleading candidate information, mostly around experience and credentials. The other 29%, presumably, are self-employed or completely oblivious (likely, the same cohort who think that HR is the most important strategic function in business or that the mission, vision and values on the “about us” section of their career site accurately reflect reality).
On the candidate side, 44% of job seekers in another recent study admitted to lying to recruiters or hiring managers during the interview process. Pro tip: the point of interviews is telling someone what they want to hear, not what’s factually accurate. Choosing radical transparency in an interview is ethically commendable, but eminently unemployable.
This isn’t artificial intelligence; it’s really a Sisyphean exercise in social engineering. But human nature isn’t nearly as sexy as machine learning, nor nearly as lucrative – so, here we are.
The AI Double Standard: Digital Twins, Deep Fakes and Other Urban Legends
AI has become a convenient culprit for an inconvenient truth: applying for a job is really, really hard. I’m not talking about the holistic hiring process – I’m specifically referring to the actual mechanics of filling out page after page of specious information and repetitive forms. Please submit a resume, then manually reenter it.
If you’re lucky, you’ll hear that they regret to inform you that they’ve decided to move ahead with other candidates whose experience and skills more closely align with the position. Another lie – they probably just hired some executive referral and that application was actually just a compliance exercise (that job posting was likely a lie, too – but the government requires every position to be publicly posted for a couple days, even if they already know who they’re gonna hire).
Enter workflow process automation – or, in another case of misrepresentation, what you probably know better as “artificial intelligence.” As you know, this makes recruitment less of an interpersonal exercise in finding the right person to hire, and more of an exercise in algorithms stack ranking the applicants who you shouldn’t hire, which is, you know, pretty much everyone. The ATS black hole used to be a myth. Now, it’s a best practice.
It’s also the reason why the recruiting industry is in the throes of an existential crisis.
A Checkr survey of around 3,000 hiring managers found that 60% had uncovered material misrepresentation in candidate experience or skill sets; around 15% of those, however, were unable to provide any concrete proof or evidence of this purported “fraud” – but the great thing about AI is how it totally removes human bias from the hiring process, driving objectivity and standardized decision matrices.
Now, even if 3 in 5 hiring managers actually did encounter a fraudulent candidate, this seemingly shocking statistic is proof of nothing more than stasis and the status quo. In fact, it’s the same story that recruiters have been telling for decades now, and candidates for infinitely longer.
The only difference is that gut feeling has been replaced by algorithmic bias, which is somehow an acceptable form of unconscious bias – the very same thing companies paid billions of dollars to train hiring stakeholders to avoid during the go-go DEIBTQ+ days.
It’s more variations on the same asinine theme – only we’ve been replaced by machine learning, and in the process, have learned nothing.
Because the fake candidate “threat” is basically the latest iteration of the ridiculous hype cycles that drive the talent acquisition conversation: choosing anecdotal over empirical evidence, and hyperfocusing on outlying anomalies rather than core capabilities.
Say Hello to My Little Trend.
These “trends” are driven largely by vendors, analysts and thought leaders whose models are mostly solutions looking for problems – not by real practitioners doing the real work, but by LinkedIn hot takes, misappropriated mass media (we’ll get to the North Koreans in a second) and content marketing that’s more category creation than customer success.
I’m not going to go too much into this here, but if you want to talk about fraud in recruitment, look at the obscene margins most vendors are making for doing basically nothing. 60% is pretty decent when it’s running payroll on OPM, or creating what’s effectively a shell corp to shield employers from talent related risks without adding any actual value, or for driving a high volume of unqualified traffic from paid media to career sites with near absolute bounce rates.
That’s fraud. This other stuff is a vocational urban legend.
Take another “headline” stat I saw recently, this one in a major media outlet (or one that’s pretending to be).
“17% of hiring managers who responded to our recent survey (note: the sample size was 122, most of whom work at closely held, middle size companies) say they’ve encountered deepfake technology in interviews.”
– Forbes
If true, that’s an interesting data point. But that’s also the same percentage of candidates who reported always hearing back on every role they apply for, and that’s been automated for a good couple decades now.
The reason employers rarely send out mass dispositions (or 83% of them, anyways) despite it being a native feature in most systems is simple: laziness mixed with hubris.
Which is sort of like trying to audit their candidate pipeline or applicant quality and coming to the conclusion that it’s actually fraudulent candidates, not hiring processes optimized to create as much friction as possible, or poorly targeted job ads, or a labyrinthine apply process necessitated by a legacy ATS that should have been sunsetted years ago.
Sure, Jan.
A quarter of companies recently reported being targeted by identity fraud in hiring, which sounds scary, until you remember that “identity fraud” can also include minor infractions like mismatched credentials or using descriptive job titles rather than some random internal convention, like banks (wtf is an Analyst VI, or a Global AVP) or, conspicuously, recruiters, who are also known as “sourcers,” “employer brand specialists,” “talent ops,” “staffing consultants,” “candidate developers” or even “casting directors.”
Those are all recruiting roles with some superficial variances, but if you were to claim that you were a recruiter on your resume as opposed to one of these esoteric title conventions, then guess what? That’s identity fraud, brother.
So, we’re not talking about a global crime ring, wanted fugitives, nefarious foreign knowledge workers or state espionage assets flooding the workforce. We’re talking the kind of thing that gets flagged by background check providers and ignored by the companies initiating them, like mixing up start dates or office locations.
I use these seemingly non sequitur examples primarily because no peer-reviewed, longitudinal study proving widespread candidate fraud has ever been published – candidate fraud has always existed, but its manifestations are minor and relatively meaningless.
AI enables these minor screwups at scale, but to present this as a major issue that’s an imminent threat to disrupt how hiring happens is a cynical attempt to create enough urgency to sell software and services (or SaaS, but apparently that’s dead now?).
The VC Echo Chamber Effect
At the risk of sounding repetitive, let me again assert that this is more of a commoditized product feature than a profession wide crisis. There exists an entire cottage industry of vendors, consultants, and, most importantly, venture capitalists who have a direct monetary incentive in cranking up the hype machine and turning what’s (at best) a marginal annoyance into a major crisis.
Here’s the thing: if hiring was suddenly overrun by deep fakes (other than Robert Half recruiters), then you need, well, deepfake detection. Which sounds really cool, even if not one member of their collective ICP has any idea where they’d even start when it came to creating an RFP for potential solution providers – and, thankfully, even less of an idea about pricing, packaging or industry standard SLAs.
Similarly, if candidates are increasingly AI generated, then you need AI to fight AI (this was my biggest takeaway from the plot of Terminator 2).
“You can’t beat the machines without learning how to fight the machines.”
– John Connor
This all creates a tidy little late stage capitalist feedback loop. Hype triggers fear. Fear triggers stupid, big ticket purchases, as most gun owners, survivalists or Ring camera subscribers would probably tell you.
And when you’re dealing with enterprise software size budgets instead of discretionary consumer goods, well, you’re probably already raising millions of dollars from VCs, who then plow even more money back into validating and scaling a solution that probably shouldn’t have gone to market to begin with (see: talent communities, recruiting specific CRMs, video interviewing products).
Magic Quadrants and Magical Thinking
As these commoditized categories start gaining traction in the market, and move from the margins to the mainstream (looking at you, blockchain based employment credentials), they finally have enough runway to purchase the ultimate stamp of approval: analyst validation.
And, if you know how the big “analyst” firms work, then you should probably save that fraud induced indignity, because this is where recruiting technology really has a deep fake problem. CHROs, VPs of TA and other senior level decision makers, as a rule, look to analysts to validate and legitimize these new sources of spend – and analyst firms are only too happy to oblige.
That’s because they’re not objective, third party observers, as many believe them to be – they’re global services organizations that rely on selling access to their proprietary research for six or seven figures to companies making tech purchasing decisions, again, providing obvious motivation to play the game rather than blow the whistle, which is way less lucrative (trust me).
Safe to say, these big, blue chip firms are widely viewed as credible, and have an inordinate amount of influence on purchasing decisions and buying behavior within the TA sector, but they’re also completely pay for play, and have obvious profit motives. Take good old Gartner, for example, an independent advisory firm that booked $6.5 billion (yeah, billion, with a “B”) in revenue last year alone.
With revenue forecast to continue rising 4-5% year over year until 2030, you better believe that Gartner (and its erstwhile competition) is bullish on AI – it’s as lucrative for them as anyone on Sand Hill Road, which makes sense, since AI has become the predominant focus of their company coverage, analyst reports and market mapping across the many verticals they serve.
The fact that they’re starting to cover candidate fraud detection and deep fake solutions should tell you something about the legitimacy of this category, particularly coming from a company that previously predicted:
- That more than 3 million global workers would be supervised by “robo-bosses” [their phrase] by 2018.
- That “one in three jobs will be converted to software, robots and smart machines by 2025.”
- That over 40% of agentic AI projects would be cancelled by 2027 (they still have two quarters to validate this claim, but it’s not looking likely, since most companies are still in the early stages of experimentation)
That’s necessary context for understanding that when you see claims like, “Gartner predicts that by 2028, one in four candidate profiles will be fake,” well, fake candidates aren’t the most blatant form of fraud you should focus on detecting.
These projections are, like most forward looking statements, basically sponsored guesswork, and in no way reflective of current reality. Inevitably, though, these misleading and fanciful exercises in creative mathematics get cited ad nauseam in sales collateral, investor materials, conference presentations, webinars and white papers.
It’s like reporting on today’s weather by looking at long-term projections on climate change, or completing an employee’s annual performance report using only data from their automated pre-hire assessments. Meanwhile, the actual, measurable impacts (or statistical prevalence) of fraudulent applicants and fake candidates are unproven – and highly concentrated in certain sectors and roles.
The FBI has, of course, documented hundreds of cases where companies hired bad actors using fake identities, including many prominent ones involving state operatives, mostly from North Korea and Russia, although Iran and China have also emerged as big players in the fake candidate game.
After Wired ran a cover piece on this (very specific, and very rare) occurrence, the floodgates opened; when recruiting can compromise state secrets or national competitiveness, the repercussions are, obviously, serious and should be taken as such.
But this isn’t new – in fact, the North Koreans have been using black hat techniques to fund the Kim regime since the early days of the internet. This includes their attempted $1B raid on Bangladesh’s central bank in 2016, or their 2014 hack of Sony’s enterprise systems, leaking everything from HR records to sensitive internal emails as retaliation for the release of “The Interview.”
The list goes on, but really, the threat this poses to the average employer is virtually nonexistent. It’s not like state operatives are applying for marketing coordinator openings or hourly retail gigs, or any of the 99.9% of open roles that don’t require security clearance or have access to sensitive information or centralized financial systems.
This is one instance, though, where it’s not better to be safe than sorry. Worrying about – or actively combatting – this issue is actually just sorry.
This is what happens when edge cases get turned into market narratives.
Trust, but E-Verify.
The bigger issue isn’t fake candidates. It’s what companies are doing in response.
Every time this topic comes up, the instinct is to add more layers. More verification steps, more screening tools, more friction. It feels responsible. It also quietly punishes real candidates far more than it stops bad ones.
Hiring already asks people to jump through absurd hoops. Some ATS workflows require over 100 interactions on mobile just to submit an application. Now imagine adding identity verification, AI monitoring, and behavioral analysis on top of that.
At some point, you’re not filtering out fraud. You’re filtering out humans.
There’s also a labor cost that no one really talks about. Every minute a recruiter spends trying to detect AI manipulation is a minute they’re not spending evaluating actual candidates. Multiply that across teams and you start to see the real impact. That’s an opportunity cost no employer can really afford, particularly in today’s market.
The fundamental irony, of course, is that most organizations still struggle with hiring basics. Even with the rise of AI, time-to-fill is up, candidate experience is still mediocre at best (although these anti-fraud measures are making it markedly worse, and are antithetical to the core concept of providing a positive candidate experience), and, to top it all off, cost per hire continues to skyrocket, almost as quickly as churn and regrettable attrition.
And yet, somehow, we’ve decided the biggest problem to solve is whether a small percentage of candidates might be using a voice filter. Right.
There’s a stat that should probably get more attention than any deepfake headline: only 26 percent of candidates trust AI to evaluate them fairly, according to another recent poll.
That’s the real trust problem – not fake candidates or fraudulent applicants.
Candidates don’t trust employers, and rightfully so. Employers don’t trust candidates, with far less justification. And, of course, both sides are increasingly relying on technology they don’t fully understand to evaluate each other and determine employability. That’s not a deep fake issue – that’s a superficial systems problem.
And it’s not going to be fixed by buying some fraud detection tool to bolt onto your stack.
Running Remote: Who Are You Really Hiring?
I know what you’re thinking – and no, I’m not completely dismissing the entire concept of candidate fraud and fake applicants. This stuff does happen, and fraud definitely exists (I know, I watched Josh Bersin present twice last month).
The thing is, fraud is evolving, and in certain contexts, particularly remote technical roles or highly sensitive work environments, there’s room for legitimate concern, which is why security clearances and background checks have existed since the Eisenhower administration.
But the industry has a habit of taking real, but really niche and highly limited ‘problems’ and inflating them into impending crises of apocalyptic proportions. This feels like one of those moments – but the good news is, it’s not too late to realize that if you’re looking for fraud, this category is an ideal place to start investigating.
Which is exactly why I’m looking forward to moderating today’s panel at Running Remote 2026 in Austin, titled, in true clickbait style, Who Are You Really Hiring? AI, Deepfakes, and Trust at Scale.
The goal isn’t to scare people, nor to completely dismiss any fears they might have about this rapidly evolving and highly complex topic. My objective is to unpack what’s real, what’s overstated, and what hiring teams should care about right now.
The panel includes people who know this topic way better than I do – and who truly believe that this is a problem that desperately needs solving: Ophir Samson, founder of Ezra.ai; Tiffany Hindman, VP of Global Sourcing and Talent Delivery at ServiceNow; and Ben Colman, CEO and President of erstwhile startup Reality Defender.
These are people dealing with the problem from very different angles, and there will likely be some dissension and a few hot takes, too – which usually leads to a more honest, more interesting conversation than the standard conference echo chamber.
It’s happening today at 1:30 PM CT on the Deep Dive Stage; if nothing else, this is one conversation that should, in theory, at least separate signal from noise.
And right now, there’s a lot more noise than anyone in hiring wants to admit. Here’s hoping they’re still listening.


