What CHROs Should Actually Ask AI Hiring Vendors About Compliance

Most talent leaders I talk to know the AI in their hiring stack is a risk. What they don’t know is what to ask about it.

The space has filled up with acronyms in the last eighteen months. ISO 42001. NYC Local Law 144. Colorado SB 24-205. Illinois HB 3773. EU AI Act. Four-fifths rule. Each one sounds like the answer. None of them, alone, is.

This is a guide for non-technical buyers. The goal is to give you enough to walk into a vendor conversation and know whether the answers you’re getting are real or theatrical.

The Two Things That Often Get Confused

The two pieces of paperwork that come up most often in vendor pitches are ISO/IEC 42001 certification and an independent bias audit. They sound similar. They are not the same, and they prove different things.

ISO/IEC 42001 is the first international AI management system standard, published in 2023. When a vendor says they’re ISO 42001 certified, it means an accredited third-party body has verified that they have governance practices in place: documented risk assessments, change controls, human oversight policies, vendor management for any foundation models they use. It’s the AI equivalent of ISO 27001 for security. Microsoft, AWS, and a small number of SaaS vendors have it. It’s becoming a procurement gate at large enterprises, especially in Europe.

What ISO 42001 does not tell you is whether the specific tool you are buying produces fair outcomes for candidates. It is a process certification, not an outcome certification.

An independent bias audit is the outcome side. An external auditor calculates selection rates by protected class and applies the EEOC’s four-fifths rule, which flags potential disparate impact whenever any group is selected at less than eighty percent of the rate of the most-favored group. NYC Local Law 144 has required these annually since 2023 for any automated employment decision tool used on NYC candidates. Colorado and Illinois are now adding their own variations.
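
To make the four-fifths rule concrete, here is a minimal sketch of the arithmetic with made-up, illustrative numbers (the group names and counts are hypothetical, not drawn from any real audit):

```python
# Hypothetical selection counts by group (illustrative numbers only).
applicants = {"Group A": 400, "Group B": 250}
selected = {"Group A": 120, "Group B": 45}

# Selection rate = selected / applicants, per group.
rates = {g: selected[g] / applicants[g] for g in applicants}
top_rate = max(rates.values())  # rate of the most-favored group

for group, rate in rates.items():
    # Impact ratio: this group's rate relative to the most-favored group.
    impact_ratio = rate / top_rate
    flag = "below four-fifths threshold" if impact_ratio < 0.80 else "ok"
    print(f"{group}: rate {rate:.1%}, impact ratio {impact_ratio:.2f} ({flag})")
```

Here Group A is selected at 30% and Group B at 18%, so Group B’s impact ratio is 0.60, well under the 0.80 threshold. This is exactly the kind of disaggregated number the audit summary should show you, group by group, not as a single pass-fail score.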

The audit ecosystem has matured quickly. Among the names worth knowing are Warden AI, BABL AI, Holistic AI, and ORCAA. They take different approaches. Warden AI focuses on HR/TA AI assurance, with continuous, technology-driven bias auditing and compliance evidence for HR tech vendors and enterprise HR teams. BABL AI provides AI audits, NYC bias audits, auditor training, and ISO/IEC 42001 assurance and certification-readiness work. Holistic AI offers a broader AI governance platform with bias assessment and audit-support capabilities, though some use cases may still require separate independent auditor signoff. ORCAA, founded by Cathy O’Neil, author of Weapons of Math Destruction, takes a more consultative, accountability- and social-impact-oriented approach to algorithmic risk. The right partner depends on what the vendor needs to demonstrate, under which regulatory or buyer standard, and to whom.

A bias audit tells you what the tool actually did. ISO 42001 tells you the vendor knows how to run an AI program. You need both. One without the other is a partial picture.

The Other Pieces

A complete answer to “is this vendor defensible” usually involves four more things.

  1. SOC 2 Type II for data security. This is table stakes and has nothing to do with AI specifically, but if a vendor doesn’t have it, the conversation stops there.
  2. Documented compliance with the jurisdictions where you hire. NYC LL144 if you have NYC candidates. Illinois HB 3773 (effective January 2026). Colorado SB 24-205 (effective June 2026). EU AI Act high-risk obligations if you hire EU residents (effective August 2026). The vendor should be able to tell you what they do for each, not just that they’re “compliant.”
  3. Adverse impact data disaggregated by race, gender, age, and disability. This is the actual underlying number behind the bias audit. Vendors who only show you a single overall pass-fail score are hiding something.
  4. Meaningful human oversight in the workflow. Regulators have been clear that “the algorithm did it” is not a defense. If the tool makes or strongly influences a hiring decision without a human reviewer who has the information and authority to override it, you have a problem regardless of what certifications the vendor holds.

A Vendor Evaluation Checklist for Non-Technical Buyers

Take this into the demo. The goal is not to trip the vendor up. It’s to find out whether they’ve done the work or whether they’re hoping you won’t ask.

  1. Are you ISO/IEC 42001 certified, or do you have a documented roadmap to certification? If neither, what AI governance framework do you use?
  2. When was your last independent bias audit, who conducted it, and can I see the public summary? For US tools, the audit should explicitly reference the four-fifths rule and report selection rates by protected class.
  3. Can you provide adverse impact data broken out by race, gender, age, and disability status for the past twelve months?
  4. Which jurisdictions’ AI hiring laws do you comply with, and what does compliance look like operationally for each? Specifically ask about NYC LL144, Illinois HB 3773, Colorado SB 24-205, and the EU AI Act if relevant.
  5. What candidate notice and consent does your tool require, and who is responsible for delivering it, you or us?
  6. How does a human override work in your tool? Walk me through the screen where a recruiter sees your scoring and makes a different decision.
  7. How long do you retain the input data, the AI outputs, and the audit logs? Most relevant laws now require multi-year retention, and EEOC recordkeeping under Title VII generally calls for at least one year of personnel records, longer if a charge is pending.
  8. Do you have a SOC 2 Type II report?
  9. If we get an EEOC charge involving a candidate your tool screened, what data and documentation will you produce, and how fast?
  10. Have you been named in any employment discrimination matter? What was the outcome?

A vendor who answers these directly is one you can probably work with. A vendor who deflects, says “we’re working on that,” or sends you to a glossy responsible-AI page without specifics is one to walk away from.

What This Doesn’t Solve

None of this immunizes you. The four-fifths rule is a screening test, not a safe harbor. ISO 42001 is governance, not fairness. A bias audit is a snapshot, not a guarantee.

What this stack does is give you a defensible posture. If a charge or a class action arrives, the question regulators and plaintiffs’ counsel ask is not “did your AI work perfectly.” It’s “did you exercise reasonable care, document your decisions, and act when you saw a problem.” Buying tools that can answer those ten questions is how you exercise reasonable care. Buying tools that can’t is how you end up explaining why you didn’t ask.

The tools that are worth buying right now are the ones whose vendors already think this way. They’re getting easier to spot.

If you want help running this evaluation against a specific shortlist, that’s the kind of work we do at IRD. Reach out at sales@integralrecruiting.com.
