
Nobody Trusts the Hiring Bot

When AI screens the applicant, AI writes the résumé, and nobody trusts the result

Markus Brinsa · May 14, 2026 · 17 min read


The modern job applicant has a new enemy, and it may or may not exist. That is the first problem.

Somewhere between the applicant tracking system, the automated résumé parser, the ranking model, the AI interview, the recruiter dashboard, the LinkedIn search algorithm, and the company that never replies, millions of job seekers have become convinced that their applications are being quietly eaten by machines. Not rejected by a person. Not weighed against stronger candidates. Not moved to the wrong pile by a tired recruiter on a Tuesday afternoon. Eaten.

The fear is not irrational. It is just impossible to verify most of the time.

A candidate applies for fifty jobs and hears nothing. Another applies for two hundred and receives only auto-generated rejections. A graduate records a one-way video interview into a blank screen, receives generic feedback, and wonders if anyone watched it. A midcareer applicant sees job coaches warning that LinkedIn profiles must be optimized for AI search or they may become invisible. A résumé consultant promises to beat the ATS. A recruiter complains that AI-written applications now look too polished to trust. A hiring platform warns of an “AI doom loop.” Everyone is optimizing. Everyone is filtering. Everyone is suspicious. Nobody knows whether the system is working.

This is where hiring has arrived.

The strange part is that both sides believe they are being outgunned by the other side’s software. Employers say they are drowning in AI-generated applications that look perfect but reveal little. Applicants say employers are hiding behind automated systems that reject them before a human being has seen their name. Recruiters add more AI to cope with application volume. Applicants add more AI to survive the filters. Then recruiters add more filters because the applications are too optimized. Then applicants optimize harder.

This is not a labor market. It is a machine-learning food fight with benefits paperwork attached.

The new hiring process feels less like selection and more like processing

The most visible recent backlash is around AI interviews. These are not always science-fiction conversations with a charming robot in a blazer. Often they are stranger and colder: a candidate sits alone, stares into a webcam, receives timed prompts, records answers, and tries to perform professional competence for a system that may be scoring, transcribing, summarizing, or simply packaging the material for later review.

The experience is frequently described in emotional terms because hiring is emotional. Losing work is emotional. Looking for work is emotional. Trying to prove your value to a blank screen while a countdown timer eats your answer is emotional.

Recent reporting on job seekers in the United Kingdom captured the problem with unusual clarity. Candidates described AI interviews as awkward, humiliating, unnatural, and especially difficult for people who do not perform well under rigid timing and one-way interaction. One candidate with autism described speaking in bullet points and keywords because the format punished the way he would normally think through a problem. That is not a small design flaw. That is a warning sign. If the hiring process pressures people to imitate machine-readable competence instead of demonstrating human competence, the employer may not be measuring the candidate. It may be measuring the candidate’s ability to survive the interface. That distinction matters.

A good interview is not only a test. It is a mutual evaluation. The employer studies the candidate, but the candidate also studies the employer.

Is this place organized? Are the people serious? Do they understand the role? Can they answer questions? Would I trust them with my time, my work, my reputation, my mortgage?

The AI interview breaks that social contract. It asks the applicant to behave as if the employer is present while the employer minimizes its own presence. The candidate must be polished, prepared, responsive, and vulnerable. The company can be absent, vague, automated, and silent. That may be efficient, but it is not neutral. It tells the applicant something about the institution before the applicant ever gets the job.

The defenders of automated interviews will say the volume is too high. They are not wrong. Hiring teams are often overwhelmed. Some roles attract hundreds or thousands of applications. Recruiters cannot manually review everything with the care applicants imagine. But the fact that a human process is broken does not mean the automated replacement is fair, humane, or accurate. Sometimes automation does not solve a bad system. It gives the bad system more throughput.

That is the real danger in HR automation. It can scale the defect.

Applicants now believe invisibility is a technical problem

For years, job seekers were told to network, tailor their résumé, write a better cover letter, and follow up. Now they are told something more paranoid and more technical: optimize for the machine.

Use the right keywords. Mirror the job description. Rewrite your LinkedIn headline. Feed the posting into an AI tool. Generate an ATS-friendly résumé. Rewrite your About section. Improve semantic fit. Align with recruiter search. Make yourself discoverable. Make yourself machine-legible.

Some of that advice is practical. A vague résumé is a bad résumé. A LinkedIn profile that says “strategic leader driving innovation” without naming actual skills, sectors, outcomes, or evidence is not helping anyone. Clear language matters. Specificity matters. If AI tools help candidates explain real experience more clearly, that is not cheating. That is editing.

But the line between clarity and costume is getting thinner.

When applicants believe a human will read their materials, they have an incentive to sound credible. When applicants believe a machine will screen their materials, they have an incentive to sound matched. Those are not the same thing. A credible résumé describes what happened. A matched résumé describes what the system appears to reward. Once the applicant’s goal shifts from truthful representation to machine survival, the résumé becomes less like a record and more like an adversarial prompt.

This is where the old phrase “garbage in, garbage out” still matters, even if everyone has tried to dress it in newer language.

If a candidate feeds vague, inflated, or incomplete information into an AI profile optimizer, the output may become more fluent without becoming more true. If a résumé tool is asked to make an ordinary work history look perfectly aligned with a role, it may smooth away the gaps, exaggerate the relevance, and convert weak evidence into confident language. If a LinkedIn profile assistant generates a polished About section from thin profile data, the result may be more searchable, more professional, and less accurate.

That is not a hallucination in the dramatic chatbot sense. It is subtler. It is résumé inflation by autocomplete. It is the career version of turning tap water into “artisanal mineral-forward hydration.”

The applicant may not think they are lying. The tool may not intend to lie. The platform may describe the feature as a writing assistant. But the market pressure is obvious. If everyone believes the profile must perform for AI, the profile starts to look less like a biography and more like a search-engine artifact.

And once every applicant sounds like a high-performing, cross-functional, data-driven, stakeholder-oriented transformation leader, the phrase no longer distinguishes anyone. It becomes HR fog.

The LinkedIn problem is bigger than profile polish

LinkedIn sits at the center of this anxiety because it is both a professional identity platform and a hiring infrastructure platform. That dual role creates a strange feedback loop.

On one side, LinkedIn offers AI-powered writing assistance for selected profile sections, available to some Premium subscribers. The company says these suggestions are personalized based on the member’s profile information and informed by analysis of millions of profiles. On the other side, LinkedIn Talent Solutions is adding AI features for recruiters, including tools that analyze candidate profiles and past interactions, generate follow-ups, collaborate with hiring managers, and incorporate applicants into AI-supported evaluations.

In plain English, the platform is helping candidates present themselves with AI while also helping recruiters evaluate candidates with AI.

That does not automatically mean something sinister is happening. It does mean the incentives deserve scrutiny.

If an AI tool improves a profile by making it clearer, more specific, and more faithful to the person’s actual experience, that is useful. If it improves the profile by making the person sound like a better match than the underlying facts support, that is dangerous. The difference cannot be solved by prettier language. It has to be solved by evidence.

The conspiracy theory version says the platform will only make users more visible if they let the AI stretch the truth. That claim should not be stated as fact without proof. But the anxiety behind it is understandable. When professional visibility depends on opaque ranking, matching, and recommendation systems, people start guessing what the system wants. When people start guessing, an optimization industry appears. When an optimization industry appears, truth becomes one variable among many.

This is already visible in search engine optimization, online dating, social media, and e-commerce. Once a platform becomes the gatekeeper, users begin shaping themselves for the gate. Hiring is now going through the same transformation, except the stakes are not clicks, likes, or dinner plans. The stakes are rent, health insurance, visa status, career progression, and dignity.

That is why the “profile optimizer” conversation is not just about better writing. It is about whether professional identity is becoming platform-compatible performance.

Employers are not exactly innocent victims

Employers often describe the flood of AI-generated applications as if it descended from the sky, like weather. It did not. Companies helped create the conditions for it.

They made applications longer. They demanded customized résumés. They required candidates to retype the same information already contained in uploaded documents. They posted vague jobs. They left applicants without replies. They used automated rejection emails that explain nothing. They created multi-stage processes for roles that barely justify one stage. They normalized ghost jobs, evergreen postings, and talent pipelines that may or may not correspond to real openings. They turned hiring into unpaid administrative labor for applicants.

Now applicants are using automation to reduce the burden. This should surprise no one.

If a company makes a candidate upload a résumé and then manually re-enter the entire résumé into a portal, the company has already announced that efficiency only matters on the employer’s side. If a candidate uses AI to fill out the portal faster, that is not the collapse of civilization. That is a predictable response to a bad process.

The problem is that bad incentives compound. Applicants who use AI responsibly are trying to save time and clarify their fit. Applicants who use AI aggressively may mass-apply to hundreds of roles, generate tailored cover letters they barely read, and submit materials that overstate alignment. Employers, seeing a flood of polished but unreliable applications, become more dependent on screening tools. The screening tools become stricter or more abstract. Serious candidates get lost in the noise. Desperate candidates optimize harder. The system becomes less human and less trustworthy at the same time.

That is the AI doom loop in hiring.

It is not simply that machines are replacing recruiters or applicants. It is that both sides are using machines to defend themselves from the consequences of the other side’s machines.

The scary part is opacity, not just bias

Bias remains one of the central risks in AI hiring, but it is not the only one. In some ways, the deeper fear is opacity.

A biased human interviewer can be challenged, at least in theory. A biased process can be documented, at least if someone knows where to look. But an automated hiring system can distribute responsibility across the employer, the vendor, the model, the training data, the configuration, the recruiter settings, the ranking logic, the résumé parser, and the dashboard. When something goes wrong, everyone can point somewhere else. The applicant sees only the rejection.

That is what makes recent litigation and regulation so important. One of the clearest legal warnings is the lawsuit against Workday, the HR software company whose tools are used by many employers to manage recruiting and applicant screening. The plaintiff, a Black man over 40 with anxiety and depression, alleged that Workday’s algorithmic screening tools helped employers reject him from more than 100 jobs and discriminated against applicants based on race, age, and disability. Workday denied that it makes hiring decisions and argued that employers, not Workday, control how its tools are used. But a federal judge allowed key parts of the case to proceed, finding that Workday could potentially be treated as an agent of employers under federal anti-discrimination law.

That matters because it attacks one of the favorite hiding places in AI hiring: the idea that the vendor only provides the tool, while the employer makes the decision.

If courts are willing to examine whether the software provider helped shape the employment outcome, then responsibility may not stop neatly at the employer’s HR department. It may extend deeper into the technology supply chain.

The case does not prove that every AI hiring tool discriminates. It does not prove that Workday’s products did what the plaintiff alleges. But it does show that courts are increasingly unwilling to treat automated screening as a magical liability shield. If a system helps rank, filter, recommend, or reject candidates, the fact that a human technically remains somewhere in the workflow may not be enough.

California has also moved aggressively. Its employment regulations addressing automated decision systems took effect in October 2025 and clarify how existing anti-discrimination rules apply when employers use tools that screen, rank, score, recommend, or otherwise assist employment decisions. The crucial point is that “assist” matters. A company cannot necessarily escape responsibility by saying the human made the final decision if the human decision was shaped by an automated ranking or recommendation system.

That is where many organizations misunderstand AI governance. They imagine liability begins only when the machine makes the final call.

But in real hiring workflows, the final call may be the least interesting part. The decisive moment may occur earlier, when a tool ranks candidates, filters the list, normalizes grades, flags gaps, scores sentiment, extracts keywords, summarizes interviews, or decides which profiles are worth surfacing.

A human can remain “in the loop” while the loop itself has already been narrowed by the machine. That is the governance problem.
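To make that concrete, here is a deliberately simplified, hypothetical sketch in Python. It describes no vendor's actual product; the function names, the keyword list, and the cutoff are invented for illustration. What it shows is how a "human in the loop" can end up reviewing only the slice of candidates a ranking step has already chosen to surface.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    resume_text: str

def model_score(candidate: Candidate) -> float:
    """Hypothetical stand-in for an opaque ranking model:
    here, simple keyword overlap with the job description."""
    keywords = {"python", "stakeholder", "cross-functional"}
    words = set(candidate.resume_text.lower().split())
    return len(keywords & words) / len(keywords)

def screen(candidates: list[Candidate], top_k: int = 10) -> list[Candidate]:
    """The machine narrows the pool before any human sees it."""
    ranked = sorted(candidates, key=model_score, reverse=True)
    return ranked[:top_k]

def human_review(shortlist: list[Candidate]) -> list[Candidate]:
    """The 'human in the loop' only ever sees the pre-narrowed shortlist."""
    return [c for c in shortlist if "lead" in c.resume_text.lower()]

pool = [Candidate(f"applicant_{i}", "cross-functional lead, python") for i in range(500)]
final = human_review(screen(pool))  # the human decides, but only over the model's top 10
```

In a pipeline like this, the decisive choice is the cutoff inside screen(), not the judgment inside human_review().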

The medical residency case shows why applicants are losing their minds

One of the most revealing recent stories did not come from an ordinary corporate hiring process. It came from medical residency applications, where the stakes are enormous and the applicants are highly credentialed.

A Dartmouth medical student, after receiving a wave of rejections, began to suspect that an AI-supported screening tool used by residency programs may have contributed to his lack of interviews. His concern centered partly on how his application described medical leaves of absence. The wording suggested “personal reasons,” while the underlying reality involved a serious autoimmune disease. He feared that ambiguous language might be interpreted negatively by automated or AI-supported tools.

The reporting on the case is important precisely because it did not resolve into a simple villain story. The company behind the tool said it did not use AI to score, rank, exclude, or filter applicants in the way the student feared. Yet the article also described reported grade-display inaccuracies, institutional confusion, program-level variation, and broader applicant anxiety. About 1,500 residency programs, or roughly 30 percent, reportedly used the tool during the 2025–2026 cycle.

That is exactly the kind of story that fuels public distrust.

Even if the applicant’s worst suspicion is not confirmed, the system is opaque enough that a highly educated candidate with medical training, technical ability, and enormous persistence spent months trying to understand whether a machine had quietly damaged his future. Most applicants cannot do that. They cannot reverse-engineer tools. They cannot file sophisticated data requests. They cannot run tests. They cannot contact every program. They cannot distinguish a human rejection from an automated ranking effect from a résumé parsing error from a recruiter preference from a market downturn.

They just disappear into the portal. And then people wonder why job seekers become paranoid.

AI interviews may punish the wrong people

One of the most serious concerns in AI hiring is not that it will always reject the least qualified candidates. The more subtle concern is that it may reject, discourage, or distort candidates who do not perform well under machine-shaped conditions.

A one-way timed video interview may favor people who can speak fluently into a camera without feedback. That is not the same as job competence. An automated résumé parser may favor conventional career paths and keyword alignment. That is not the same as potential. A profile-matching tool may favor people whose experience maps neatly onto known categories. That is not the same as judgment. A sentiment or communication analysis system may misread disability, accent, age, neurodivergence, cultural background, or simply personality. That is not the same as risk.

The cruel joke is that many companies claim they want diverse talent, unconventional thinkers, resilient operators, career changers, and people who bring different perspectives. Then they deploy hiring systems optimized for standardized signals, conventional patterns, and smooth machine readability.

That contradiction should make executives uncomfortable. If your process cannot recognize the candidate unless the candidate has been translated into the preferred dialect of your software, you are not modernizing hiring. You are narrowing it.

The résumé arms race destroys the signal

Hiring depends on signals. A résumé is a signal. A profile is a signal. A degree is a signal. A portfolio is a signal. A recommendation is a signal. An interview is a signal. None of them are perfect, but together they help employers make decisions under uncertainty.

AI is now weakening several of those signals at once.

The résumé can be generated. The cover letter can be customized instantly. The LinkedIn summary can be polished by a writing assistant. The interview answer can be rehearsed with a chatbot. The work sample may be AI-assisted or AI-created. The applicant’s outreach message can be personalized at scale. The recruiter’s reply can also be generated. The rejection email can be automated. The feedback can be generic. The entire exchange can become a performance of attention without much attention inside it.

This does not mean every AI-assisted application is dishonest. That would be too simple and unfair. Many candidates use AI because the process has become exhausting, repetitive, and opaque. Many use it to improve language, not fabricate experience. Many non-native English speakers, neurodivergent applicants, older workers, and career changers may benefit from tools that help them express themselves more clearly.

But when the whole system rewards optimization, the signal begins to rot.

Employers then respond by demanding more proof. Verified skills. Work samples. Assessments. Live interviews. References. Background checks. More stages. More friction. More surveillance. Candidates respond by using more tools to prepare for those stages. Employers respond with more tools to detect tool use. The arms race continues.

At some point, hiring stops being about finding the right person and becomes about detecting whether the person has been too effectively optimized. That is absurd. It is also where the market is heading unless employers change the process.

AI in HR is expanding beyond hiring

Hiring gets the headlines because applicants feel the harm directly. But AI in HR is spreading across the employee lifecycle.

The same kinds of tools can influence internal mobility, promotion, compensation, performance management, productivity monitoring, training recommendations, workforce planning, layoffs, and termination decisions. That is where the issue becomes even more serious. A candidate rejected by an opaque hiring system may never know what happened. An employee managed by opaque HR analytics may gradually find opportunities disappearing, performance ratings shifting, or risk scores accumulating without ever seeing the logic.

The “algorithmic boss” does not need to fire you directly to shape your future. It can decide which opportunities you see, which training you receive, which managers notice you, which teams consider you, which risk category you fall into, and whether your name appears on a shortlist when leadership wants to cut costs.

This is why the phrase “human oversight” needs to be treated with suspicion unless it is defined.

A human who rubber-stamps a machine recommendation is not oversight. A human who cannot understand, challenge, or override the system is not oversight. A human who sees only the dashboard output and not the underlying assumptions is not oversight. That is theater.

Real oversight requires access, explanation, logs, testing, appeal rights, accountability, and the power to stop the system. Most HR departments are not built for that yet.

What responsible companies should understand

The companies that use AI in HR responsibly will not be the ones that simply buy the most advanced tools. They will be the ones that treat hiring as a trust system, not just a throughput problem.

The first question should not be whether AI can reduce recruiter workload. Of course it can. The first question should be what kind of decision the tool is shaping. Screening a résumé for missing required certifications is different from ranking human potential. Scheduling interviews is different from assessing personality. Summarizing an interview transcript is different from scoring communication ability. Suggesting relevant candidates is different from silently excluding candidates. Those distinctions matter legally, operationally, and morally.
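A small hypothetical sketch makes that distinction visible. Neither function describes a real product; both are invented for illustration. The first is a screening rule: deterministic, explainable, easy to audit. The second is a ranking score: a single number with no reason attached.

```python
def missing_certifications(candidate_certs: set[str], required: set[str]) -> set[str]:
    """Screening: a rule with a reason. A non-empty result states exactly
    why the candidate was screened out."""
    return required - candidate_certs

def potential_score(resume_text: str) -> float:
    """Ranking 'potential': a stand-in for an opaque model output.
    The number says nothing about why one person outranks another."""
    return min(1.0, len(resume_text.split()) / 500)  # hypothetical proxy

missing_certifications({"RN"}, {"RN", "BLS"})                 # {'BLS'} -- explainable
potential_score("strategic leader driving innovation " * 40)  # 0.32   -- not
```

The first output can be defended with a reason; the second can only be defended with testing and validation evidence.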

The second question should be whether the tool improves the signal or merely processes the noise faster. If AI helps recruiters find qualified candidates they would otherwise miss, good. If it helps candidates explain real experience more clearly, good. If it reduces administrative burden without hiding accountability, good. But if it encourages applicants to inflate, employers to filter blindly, recruiters to disengage, and candidates to distrust the process, then the system is not becoming smarter. It is becoming more automated and less reliable.

The third question should be whether candidates are told what is happening. Transparency will not solve every problem, but secrecy makes every problem worse. Applicants should know when AI is used, what role it plays, whether a human reviews the result, how accommodations are handled, and whether they can request an alternative process. Employers that resist this level of clarity should ask themselves why. If the tool is fair, useful, and defensible, disclosure should not be terrifying.

The fourth question should be whether the employer can explain the decision after the fact. Not with marketing language. Not with “the vendor handles that.” Not with a screenshot of a score. A real explanation. What data was used? What criteria mattered? What was the tool allowed to do? Who reviewed the output? What safeguards existed? What bias testing was performed? What happens if the tool is wrong?
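Answering those questions later requires a record created at decision time. The sketch below is hypothetical rather than a description of any existing HR system; every field name is invented, but each one corresponds to a question the employer should be able to answer.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningDecisionRecord:
    """Hypothetical audit record written whenever an automated tool
    touches a candidate. Field names are illustrative only."""
    candidate_id: str
    requisition_id: str
    tool_name: str                   # which system acted
    tool_role: str                   # e.g. "rank", "filter", "summarize"
    inputs_used: list[str]           # what data the tool saw
    criteria: list[str]              # what the tool was configured to weigh
    output: str                      # score, rank, or recommendation produced
    human_reviewer: str | None       # who looked at it, if anyone
    overridden: bool                 # did the human change the outcome?
    bias_test_reference: str | None  # last validation or adverse-impact review
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = ScreeningDecisionRecord(
    candidate_id="c-1042",
    requisition_id="req-77",
    tool_name="resume_screen",
    tool_role="filter",
    inputs_used=["resume_text", "application_answers"],
    criteria=["required_certifications", "years_of_experience"],
    output="advanced",
    human_reviewer="recruiter_17",
    overridden=False,
    bias_test_reference="adverse-impact-review-2026-Q1",
)
```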

If the answer is not available, the company is not using AI as a hiring tool. It is using AI as a responsibility laundering machine.

The future of hiring cannot be machines performing sincerity at each other

There is a bleak version of the near future in which hiring becomes almost entirely synthetic at the front end.

The job description is AI-generated. The résumé is AI-optimized. The cover letter is AI-written. The profile is AI-polished. The recruiter search is AI-assisted. The first interview is AI-mediated. The transcript is AI-summarized. The ranking is AI-influenced. The rejection is AI-generated. The candidate asks another AI what went wrong. The answer is confident, plausible, and probably useless.

Everyone saves time. Nobody trusts the outcome. That is the future serious companies should avoid.

The point is not to remove AI from hiring. That is unrealistic and unnecessary. The point is to stop pretending that automation is automatically fairness, efficiency, or intelligence. Hiring is not a simple matching problem. It is a high-stakes judgment process under uncertainty, involving identity, opportunity, money, status, law, and human dignity. The more AI enters that process, the more governance it requires.

The old hiring system was already flawed. It favored networks, polish, credentials, conventional paths, and people who knew how to play the game. AI did not create those problems. But AI can scale them, obscure them, and make them feel mathematically legitimate. That is worse than ordinary unfairness because it arrives wearing the costume of objectivity.

Applicants are not wrong to be anxious. Employers are not wrong to be overwhelmed. Recruiters are not wrong that the application flood is real. Platforms are not wrong that AI can improve parts of the process. But everyone is wrong if they think more automation alone will restore trust.

Trust will come from evidence. It will come from clear rules, honest disclosure, constrained use, bias testing, human appeal, and a hiring process that does not treat people as data exhaust until the final round.

The labor market does not need more machines pretending to understand humans. It needs humans willing to remain accountable for the machines they use.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.