Unpacking the Myth of Ethical AI in HR Tech

Let’s get one thing out of the way: the term “ethical AI” is about as meaningful as “video interviewing intelligence” or “programmatic recruitment advertising.” It sounds good. It tests well. It makes people feel better about decisions they already regret. But under the hood? It’s just spin.
There’s no universally accepted definition of “ethics” — not in philosophy, not in business, and definitely not in machine learning. So when a vendor slaps “ethical AI” on a product sheet or company blog, it’s usually less about principles and more about PR.
And here’s the kicker: the louder a company shouts about building ethical AI, the more likely it is they’re trying to distract you from the fact that their AI doesn’t work.
Or worse — that they’ve spent the last decade cutting corners, exploiting data, or selling snake oil. Turns out the only consistent “ethos” among these vendors is a rich history of building products that are as broken as their moral compasses.
Ethics Are Subjective. AI Is Math. “Ethical AI” Is Marketing.

Here’s a philosophical gut punch: there is no such thing as universal ethics. What’s ethical in one culture is problematic in another. Kant, Bentham, St. Augustine, Nietzsche, Spike Lee — pick your flavor of moral philosophy.
None of them agree on what “doing the right thing” actually means.
Which makes the idea of “embedding ethics” into an algorithm basically like trying to code empathy in Python — subjective, context-dependent, and not even remotely scalable. And if you understand the above references, we should definitely be friends.
The thing is, ethical principles vary wildly across individuals, industries, and geographies.
Europe has GDPR and a risk-based AI Act. The U.S. has, well, “vibes” (not always good vibrations, granted) and maybe a strongly worded blog post from the FTC or a public statement from some second-tier law firm trying to drum up a class action lawsuit.
Conversely, China has tight surveillance wrapped in “security,” as do Russia, North Korea and other autocratic police states, like Texas, for example.
Meanwhile, most HR Tech vendors peddling “ethical AI” just have a slide deck written by a junior marketer with a liberal arts degree and a basic ChatGPT subscription.
A Brief History of Ethics and HR Technology
The companies banging the ethical AI drum the loudest tend to have three things in common:
- A history of building mediocre or failed products.
- A public record of shady practices — data misuse, false claims, discrimination.
- A sudden pivot into “responsibility” only after getting called out, sued, or passed over in RFPs.
Got it? Cool. Now, let’s go on a quick historical tour of some of HR tech’s finest purveyors of “ethical AI,” like a B2B Bill & Ted’s Excellent Adventure, although none have ever chosen to “be excellent to each other.” There are, however, some very strange things afoot when it comes to ethics in HR Tech.
I could come up with these kinds of case studies ad infinitum, but here are three examples that effectively represent the gaping disconnect between messaging and morality in our industry.
LinkedIn: Professional Networking Meets Algorithmic Discrimination
LinkedIn loves to talk about its “Responsible AI” principles. They’ve published blog posts, policy updates, and even launched an internal “responsible AI” team. Which is great — until you realize that some of LinkedIn’s most visible AI features have been anything but responsible.
Take their “People You May Know” algorithm. In 2016, ProPublica reported that LinkedIn’s suggestions were helping recruiters exclude Black-sounding names and promote network-driven homogeneity — reinforcing bias under the guise of personalization.
Despite these criticisms, the company doubled down on its black-box matching models and recommendation engines – as evidenced by the title of a January 2025 BBC story, “LinkedIn Accused of Using Private Messages to Train AI.” Because of course they do.
And while LinkedIn claims its AI is “fair and inclusive,” a study from MIT Technology Review found that its job-matching algorithm was giving different results to men and women, even when their profiles were identical. LinkedIn’s fix? Minor tweaks and a statement about “continually improving.”
Sure, Jan.
Meanwhile, HR leaders using LinkedIn’s Recruiter product have long complained (mostly in private, of course — they all still pay half their annual budgets for access to the world’s largest digital Rolodex) that the search filters often surface the same candidates over and over again, with little transparency on how the “AI” decides who’s relevant.
That’s not “ethical AI” — it’s just a rerun of every bad CRM or ATS system you’ve ever used, but now with more buzzwords and a heftier price tag.
But hey, at least they’ve got a Responsible AI landing page and a few Diversity & Belonging webinars. That practically counts as compliance, right? And let’s give credit where credit is due – compliance can’t be easy when you’re filing a WARN notice and doing a RIF pretty much every other week.
HireVue: The OG of Algorithmic Overreach
HireVue was once the poster child for AI interviews — using facial recognition and vocal tone analysis to predict job success. The company billed this as a “scientifically validated,” proprietary capability that was, in fact, less scientifically validated than, say, phrenology or intelligent design (the latter being the antithesis of whatever went into HireVue’s product strategy).
To their credit, they made it sound pretty legit, until the Electronic Privacy Information Center (EPIC) filed a complaint with the FTC alleging the system was opaque, unscientific, and borderline Orwellian.
They eventually dropped facial analysis — not because it didn’t work, but because they couldn’t prove it didn’t discriminate. But instead of admitting fault, they rebranded the move as an “ethical decision.” Classic.
That’s like saying you “chose” to shut down your restaurant after the health inspector found rats in the fryer. Or that your “ethical” software hasn’t been repeatedly cited in federal civil rights complaints for enabling discriminatory hiring practices.
Algorithmic Bias: Yes, Indeed.
Indeed positions itself as a mission-driven company that wants to “help people get jobs,” a questionable claim, statistically speaking.
But when you peel back the veneer of their “ethical AI” messaging, what you find is a platform that’s spent years quietly making hiring less transparent and more exclusionary — all while preaching fairness and inclusion.
Take their resume scoring and job matching algorithms. These are designed to “connect the right candidates to the right jobs,” which sounds great until you realize there’s no real visibility into how those decisions are made.
Candidates get ghosted not because they’re unqualified, but because an opaque algorithm decided they weren’t “relevant” enough. Relevance, of course, being a proprietary metric no one outside of Indeed gets to define — not even the employers footing the bill. At least after that whole PPC debacle.
Indeed came under fire for allowing discriminatory job ads on its platform, including listings that excluded older workers and used proxy language to filter out women or minorities. The company responded with the usual “we take this seriously” boilerplate, but offered little in the way of systemic change. Instead, they introduced more automated moderation — powered, naturally, by AI.
And don’t forget the infamous pay-to-play model, where employers can boost job visibility by paying for sponsored listings. That’s not just a revenue strategy — it actively skews the hiring process. Candidates matched to jobs aren’t necessarily the best fit. They’re just the best funded. That’s not ethical AI. That’s just Google Ads for job boards wearing a halo.
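For the mechanics-minded, here’s a deliberately cartoonish, entirely hypothetical sketch of the structural problem. No claim that this is Indeed’s actual model; it’s just what any ranking function turns into once paid placement gets a seat at the table:

```python
# Hypothetical pay-to-play ranking. Every name and weight here is invented
# for illustration, not pulled from any vendor's real system.
def rank_score(relevance: float, sponsor_bid_usd: float,
               bid_weight: float = 0.05) -> float:
    """relevance is 0-1; every sponsored dollar buys bid_weight extra points."""
    return relevance + bid_weight * sponsor_bid_usd

# A mediocre-fit sponsored listing outranks a near-perfect organic match:
print(rank_score(relevance=0.95, sponsor_bid_usd=0.0))   # 0.95
print(rank_score(relevance=0.60, sponsor_bid_usd=10.0))  # 1.10
```

The moment that bid weight is greater than zero, “best match” quietly becomes “best funded.” That’s the whole trick.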
If you’re going to claim to be building “ethical AI” while profiting from opaque decisions, systemic bias, and pay-to-play rankings, maybe try not doing the exact opposite in your core business model. Or at least come up with a slogan that doesn’t make irony physically painful.
At least their parent company, Recruit, hasn’t done anything nefarious in their storied history, like leading the overthrow of a Japanese Prime Minister and his entire administration, or their purported money laundering for the Yakuza, long rumored to have a significant financial stake in a company that processes over 90% of all HR transactions in one of the world’s largest economies.
Situational Ethics and Applied Morality in HR Tech
Here’s the problem with relying on vendors to be the moral compass of your hiring stack: they’re not accountable to your values. They’re accountable to revenue. That means the same company preaching “responsibility” today is probably the same one that was scraping candidate data without consent or buying lead lists from shady brokers three years ago.
Ethics aren’t a feature. They’re a framework. And if your internal hiring practices are flawed, no amount of “AI fairness tuning” will save you from the consequences.
Instead of outsourcing your values to a tech vendor, try this radical idea: define them yourself.
- Want your hiring process to be fair? Audit it. Regularly. I know – genius idea. (A starter sketch follows this list.)
- Want your tech to be transparent? Demand access to the models, or at least third-party explainability. This is particularly important for any “agentic AI” use case, since the fundamental use case (autonomous action) and the foundational models underneath it are both black boxes by design.
- Want to avoid lawsuits? Don’t blindly trust the vendor who calls themselves “ethical” the loudest.
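If “audit it” sounds hand-wavy, it isn’t. Below is a minimal sketch of a first-pass check any analyst can run on a hiring funnel export: the EEOC’s four-fifths rule for screening adverse impact. The column names and toy data are my assumptions; point it at whatever your ATS actually exports.

```python
# Minimal adverse-impact screen using the EEOC's four-fifths rule.
# Column names ("gender", "hired") and the toy data are illustrative.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "gender",
                          hired_col: str = "hired") -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[hired_col].mean()
    return rates / rates.max()

# Hypothetical funnel export from your ATS.
funnel = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F", "F", "F", "F"],
    "hired":  [1,   1,   0,   1,   1,   0,   0,   0],
})

ratios = adverse_impact_ratios(funnel)
flagged = ratios[ratios < 0.8]  # below 4/5 of the top rate = worth a closer look
print(ratios)
if not flagged.empty:
    print("Potential adverse impact for:", ", ".join(flagged.index))
```

Passing this screen proves nothing on its own, and failing it isn’t a verdict; it just tells you exactly where to start asking harder questions.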
Let’s be honest. “Ethical AI” in HR tech is just the new “candidate experience.” No one’s going to take the contrarian point of view that ethics are bad, which conveniently keeps any deeper conversation or critical analysis of these claims off the table.

Thing is: ethics are neither good nor bad – they’re a framework that’s abstract, dynamic and highly situational. Hitler and Larry Ellison are both “ethical” – it’s just those ethics likely aren’t aligned with yours, unless you’re a fascist or a patent attorney. The same goes for any vendor’s claims of “ethical AI.”
It’s a meaningless buzzword, but we can agree that it’s fairly unethical to use this verbiage to essentially mask the fact that most of these products are glorified resume sorters duct-taped to a CRM, creating and selling “solutions” to fix the very problems their platforms helped create.
Just as in philosophy, there are no standardized legal benchmarks for “ethical” algorithmic decision-making in hiring — not in the U.S., and, surprisingly, not globally, either. The closest we’ve gotten to a binding standard is the EU AI Act, which classifies hiring tools as “high-risk” and requires transparency, audit trails, and human oversight.
But that’s Europe. In the U.S., conversely, most vendors are governed by whatever their lawyers can justify and whatever their Series C lead investor will tolerate. So, while vendors posture about ethics, most are just trying to get ahead of regulation — not because they care, but because they know the lawsuits are coming.
And because most of the time, the legal liability for enterprise software misuse, misappropriation or compliance violations rests not with the vendor who sold the product, but with their customers. Caveat emptor, y’all.
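For what it’s worth, the “transparency, audit trails, and human oversight” the EU AI Act demands aren’t mystical, either. Here’s a hedged sketch of the minimum wiring they imply, with every function and field name invented for illustration: log each automated decision with its model version and inputs, and gate anything candidate-facing behind a named human.

```python
# A hedged sketch of "audit trail + human oversight" in code. Every name here
# (score_candidate, MODEL_VERSION, the log format) is an invented illustration,
# not any vendor's actual API.
import json
import time
import uuid

MODEL_VERSION = "match-model-v0.1"  # hypothetical; pin whatever you actually deploy

def score_candidate(features: dict) -> float:
    """Stand-in for a real matching model."""
    return 0.5  # placeholder score

def log_decision(candidate_id: str, features: dict, score: float) -> dict:
    """Append a record of who was scored, by which model, with what result."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,
        "candidate_id": candidate_id,
        "features": features,
        "score": score,
        "human_reviewed": False,  # nothing candidate-facing until a person flips this
    }
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def require_human_signoff(record: dict, reviewer: str) -> dict:
    """The 'oversight' part: a named human owns the final call."""
    record["human_reviewed"] = True
    record["reviewer"] = reviewer
    return record

# Usage: score, log, then get a human on the hook before anyone gets rejected.
features = {"years_experience": 7}  # hypothetical candidate features
record = log_decision("cand-123", features, score_candidate(features))
record = require_human_signoff(record, reviewer="recruiter@example.com")
```

None of this is technically hard. It just doesn’t fit on a landing page.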
Ethics in HR Tech: A Vendor Litmus Test
Want to know if an AI vendor is actually ethical? Don’t look at their marketing. Instead, look for:
- Their model documentation: Do they publish test results or bias audits?
- Their customer list: Are they working with companies that have a track record of discrimination or wage theft?
- Their leadership team: Did their last startup implode in a swirl of NDAs and unpaid invoices?
- Their investor pressure: Are they beholden to private equity timelines that prioritize exit velocity over responsible deployment?
Odds are, if a vendor’s idea of ethics is limited to a landing page and a few DEI buzzwords, they’re not ethical — unless you consider a relentless pursuit of a liquidity event or maximizing shareholder value and/or revenues as “ethical” (with a positive connotation, anyway).
And if you do, chances are, you probably have a pretty big seat at the cap table.
The Future of HR: The AI Ethic Cleansing Imperative
The next time you see “ethical AI” in a press release, ask yourself one question: what are they trying to hide?
Because in HR tech, the most ethical thing you can do is stop pretending that ethics can be outsourced, automated, or embedded in an algorithm. They can’t.
Real ethics require real work — transparency, accountability, governance, and oversight. If your vendor can’t deliver those, it doesn’t matter how many times they say the word “ethical.”
After all, Enron once had a code of ethics too.
