
Governance & Change Control: How ATS Changes Actually Happen


When someone wants to change a workflow, modify a field, update an approval template, or reconfigure something in your ATS, what actually happens?

Here’s the worst-case scenario: A recruiter sends an urgent Slack message. A VP emails the vendor directly. Someone submits a ticket without context. Changes get made in production. Nobody documents anything. And three weeks later, workflows are broken and nobody remembers why.

This isn’t malicious. It’s what happens when you don’t have governance and change control.

The question isn’t whether people will request changes – they will. The question is whether you have a system to evaluate, prioritize, and implement those changes without breaking everything else.

Not sure where you stand? Take our ATS Maturity Assessment to see how your workflow design compares to industry benchmarks.


The Foundational Tier

What this looks like:

Changes happen in response to whoever is loudest or most urgent. There’s no intake process, no prioritization framework, and requests come through Slack, email, hallway conversations, and direct messages to the vendor. The person handling system administration (usually someone doing this as a side job) is just trying to keep up with the flood of requests.

That person is:

  • Getting pinged on Slack about “urgent” changes that need to happen “today”
  • Submitting tickets to the vendor without full context because they don’t have time to investigate
  • Making changes directly in production because there’s no test environment
  • Saying yes to everything because they don’t have the authority to push back
  • Working late to fix things that broke because of a change nobody thought through

Meanwhile, the people making requests are operating in very limited context. They see a problem in their workflow and they know what they want fixed – but they have no visibility into the downstream implications. Nor should they. That’s not their job.

What’s actually happening:

Ad hoc changes are snowballing into bigger problems. Someone requests a field change without realizing it will break an integration. Leadership demands an urgent workflow update without explaining why, so the system admin implements it without understanding the business context. A “quick fix” for one recruiter creates confusion for 50 others.

The person doing system administration doesn’t have the bandwidth to contextualize these requests, let alone evaluate whether they’re good ideas. So the most urgent requests get handled first, regardless of whether they’re actually important.

This creates massive frustration. Teams start wondering why they bought this expensive, complicated system in the first place when it keeps breaking.

What to do about it:

1. Create a basic triage system

Stop accepting change requests through Slack and email. Create a single intake point – it can be as simple as a Google Form or Monday.com board – where people submit requests with basic information:

  • What’s not working and why it matters
  • Who is impacted by this issue
  • How urgent this really is (not everything can be “urgent”)
  • What happens if we don’t make this change

This gives you a log of what’s been requested and creates a moment of friction that helps people think through whether their request is actually necessary.
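Even a flat log of requests goes a long way. Here’s a minimal sketch of what an intake record might capture, assuming you roll your own log rather than use a Google Form or Monday.com board; the field names are illustrative, not from any specific tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRequest:
    summary: str           # what's not working and why it matters
    impacted: str          # who is affected by this issue
    urgency: str           # "low", "normal", or "urgent" -- not everything can be urgent
    cost_of_inaction: str  # what happens if we don't make this change
    submitted: date = field(default_factory=date.today)

requests = [
    ChangeRequest("Offer approval skips Finance", "All US recruiters",
                  "urgent", "Non-compliant offers go out"),
    ChangeRequest("Rename a custom field", "One sourcer",
                  "low", "Minor cosmetic confusion"),
]

# Even this flat list shows what's been asked for, by whom, and when.
for r in requests:
    print(f"[{r.urgency:>6}] {r.summary} (impacts: {r.impacted})")
```

The point isn’t the tooling; it’s that every request carries the same four pieces of context before anyone acts on it.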

2. Improve how you communicate with support

When you submit a ticket or request to your vendor, don’t just say “we need to change X field.” Explain the problem first:

  • What’s not working and why it matters
  • Screenshots or screen recordings showing the issue
  • What you’ve already tried
  • What business process this is impacting

This context helps support understand what you’re actually trying to solve, not just what you’re asking for. Sometimes there’s a better solution than what you requested.

Quick win: Before submitting your next change request, record a 60-second video using Loom or similar software showing the problem and explaining why it matters. This single habit will dramatically improve how quickly and accurately your requests get handled.


The Functional Tier

What this looks like:

You have someone officially responsible for system administration, and there’s a formal intake process. Requests come through a shared system where they get logged and tracked. But there’s still no real prioritization framework, and you don’t have a test environment.

Changes still happen mostly in response to urgency rather than importance. Your admin tries to evaluate requests, but they’re juggling too many other responsibilities to do proper due diligence. And because changes are made directly in production, there’s always risk.

What’s actually happening:

You’re getting better at the mechanics of change (documenting requests, tracking what changed) but you’re still reactive. The loudest voice or the highest-ranking person tends to get their requests prioritized, not necessarily the changes that would have the most impact.

Your system admin is probably one of your smartest people – technical, thoughtful, good at problem-solving. But they lack the authority to push back on bad requests or the bandwidth to educate stakeholders about why certain changes don’t make sense.

They’re also likely making changes without fully understanding the business context. A VP says “we need this changed urgently” but doesn’t explain why, so your admin implements it literally rather than solving the underlying problem.

This leads to:

  • Changes that solve one team’s problem but create problems for others
  • Requests sitting in the backlog for months because nobody has time to evaluate them
  • Frustration from both the admin (who feels overwhelmed) and stakeholders (who feel ignored)
  • Rework when changes don’t actually solve the problem they were meant to address

What to do about it:

You need two things: structured prioritization and a test environment.

Get a test environment

This is where you need to invest. If you’re making changes directly in production, you’re gambling every time. A test environment (sometimes called a sandbox or demo site) lets you validate changes before they go live.

Yes, depending on your vendor, it may cost extra. It’s worth it. The first time you avoid breaking a critical workflow because you tested it first, it will pay for itself.

Implement structured prioritization:

Even if it’s lightweight, you need a framework:

Simple prioritization criteria:

  1. Impact: How many people/processes does this affect?
  2. Effort: How complex is this change? Does it require vendor involvement?
  3. Risk: What could break if we do this? What breaks if we don’t?

High-impact, low-effort, low-risk changes go first. Everything else gets scheduled based on capacity and strategic priorities.
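The impact/effort/risk triage above can be sketched as a simple scoring function. The 1–3 scale and the weighting here are assumptions; tune them to your own strategic priorities:

```python
def priority_score(impact: int, effort: int, risk: int) -> int:
    """Higher score = do sooner. Rewards impact, penalizes effort and risk.
    Each input is scored 1 (low) to 3 (high)."""
    return impact * 2 - effort - risk

# Hypothetical backlog: (name, impact, effort, risk)
backlog = [
    ("Fix broken interview-scheduling link", 3, 1, 1),  # high impact, low effort/risk
    ("Add vanity dashboard widget",          1, 3, 2),  # low impact, high effort
    ("Rework offer-approval chain",          3, 3, 3),  # high impact but heavy lift
]

# High-impact, low-effort, low-risk changes float to the top of the queue.
for name, i, e, r in sorted(backlog, key=lambda x: priority_score(*x[1:]), reverse=True):
    print(f"{priority_score(i, e, r):>3}  {name}")
```

Even a crude score like this beats “loudest voice wins,” because it forces every request through the same three questions.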

Weekly review cycle: Set aside time each week to triage new requests. Your admin evaluates them against the framework, assigns priority, and communicates timelines back to stakeholders.

Empowerment and authority:

Your system admin needs explicit authority to:

  • Ask clarifying questions before implementing requests
  • Push back on requests that don’t make sense
  • Delay changes that need more investigation
  • Say no to changes that would create more problems than they solve

This authority needs to come from leadership. If your admin is being told to “just do what the VP wants,” you don’t have governance – you have chaos with extra steps.

What’s costing you: Without proper prioritization, you’re probably spending 40% of your admin’s time on changes that don’t meaningfully improve your hiring process, while critical optimizations sit in the backlog indefinitely.


The Optimized Tier

What this looks like:

You have formal governance: structured intake, documented prioritization, regular testing in a dedicated test environment, and release notes that tell everyone what changed and why.

Changes are tracked from request through implementation. Stakeholders know when to expect their requests to be evaluated and implemented. Your system admin has time to investigate the best solution, not just the fastest one.

At this level, you have:

  • A dedicated system administrator with 3-5+ years of experience who’s been through enough bad changes to know what to watch for
  • A test environment where all changes are validated before going live (non-negotiable for companies over 5,000 employees)
  • Regular release cycles (weekly, biweekly, or monthly depending on change volume)
  • Communication channels that keep stakeholders informed without overwhelming them

For global companies (5,000+ employees), this level of governance isn’t optional – it’s impossible to run operations without it.

What’s actually happening:

Your ATS is stable and predictable. People know how to request changes, how those requests will be evaluated, and when to expect implementation. Changes are thoroughly tested before going live, so workflows rarely break unexpectedly.

But you still have two challenges:

Challenge #1: Managing up

Your system admin is deeply technical and wants to be thorough. Leadership wants the bottom line. This creates communication friction.

Your admin starts explaining the technical reasons for a decision and leadership’s eyes glaze over. Or your admin gives too much detail and the conversation goes down rabbit holes without reaching decisions.

The admin thinks they’re demonstrating value by being comprehensive. Leadership thinks they’re being unnecessarily complex.

Challenge #2: Balancing speed and thoroughness

Perfect governance can slow you down. If every change requires a formal review, impact analysis, testing cycle, and release notes, you might be too slow to respond to legitimate urgent needs.

You need fast-track procedures for true emergencies while maintaining governance for everything else.

What to do about it:

Simplify upward communication:

Train your system admin to lead with impact, not mechanics:

❌ “We can’t implement this request because it would require modifying the custom field dependencies in the workflow configuration, and that would impact how data flows through the integration layer…”

✅ “This change would break our background check integration for 2-3 days while we reconfigure it. I’d recommend we wait until our scheduled integration maintenance window next month.”

Get to the point fast. Technical details come later if someone asks.

Document everything, but make it scannable:

Your release notes should have three levels:

  1. Executive summary: What changed and why it matters (2-3 sentences)
  2. User impact: What people need to know or do differently (bullet points)
  3. Technical details: Full documentation for future reference (collapsible section)

Most people read #1 and #2. #3 exists for when something breaks and you need to understand what changed.
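The three-level structure can be templated so every release note comes out the same shape. This is an illustrative sketch; the section names follow the structure above, and the formatting (including the collapsible section) is an assumption:

```python
def release_note(summary: str, user_impact: list[str], technical: str) -> str:
    """Build a three-level release note: executive summary, user impact,
    and technical details tucked into a collapsible section."""
    bullets = "\n".join(f"- {item}" for item in user_impact)
    return (
        f"## Summary\n{summary}\n\n"
        f"## User impact\n{bullets}\n\n"
        f"<details><summary>Technical details</summary>\n{technical}\n</details>\n"
    )

note = release_note(
    summary="Req approvals under $75K now skip the second approver.",
    user_impact=["Affects Finance and Operations hiring managers.",
                 "No action required; your next req follows the new process."],
    technical="Modified approval-chain rule R-12; no integration changes.",
)
print(note)
```

The discipline of filling in all three levels every time matters more than the exact template.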

Create fast-track procedures:

Define what qualifies as “urgent” (actual business stoppage, compliance risk, or time-sensitive opportunity). True urgent changes can skip the normal queue with VP-level approval and mandatory post-implementation review.

Everything else goes through governance.
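The fast-track gate above boils down to two checks: does the reason qualify, and did a VP sign off? A minimal sketch, using the article’s definition of “urgent” (the function and names are illustrative):

```python
# The qualifying reasons, per the definition of "urgent" above.
URGENT_REASONS = {"business stoppage", "compliance risk", "time-sensitive opportunity"}

def can_fast_track(reason: str, has_vp_approval: bool) -> bool:
    """Skip the normal queue only when the reason qualifies AND a VP signed off.
    Everything else goes through standard governance. A post-implementation
    review is mandatory for anything fast-tracked."""
    return reason.lower() in URGENT_REASONS and has_vp_approval

print(can_fast_track("compliance risk", has_vp_approval=True))   # qualifies
print(can_fast_track("VP is impatient", has_vp_approval=True))   # does not
```

Requiring written approval is what separates real urgency from impatience; most “urgent” requests quietly return to the normal queue once someone has to put their name on them.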

Empower regional admins:

If you’re global, your sub-administrators should have authority to make certain regional changes within defined guardrails. Central governance should focus on changes that affect multiple regions or core system architecture.

Advanced strategy: Hold quarterly “technical debt” sprints where your admin gets dedicated time to fix the accumulated problems that never rise to “urgent” but are slowly degrading system performance. This prevents the slow erosion that happens when you only respond to crises.


The Bottom Line

Governance isn’t bureaucracy – it’s protection.

Without it, you’re one urgent request away from breaking something critical. With it, you can move fast without breaking things.

The sophistication of your governance should match your organizational complexity:

  • Below 1,000 employees: Basic intake procedures will go a long way
  • 1,000-5,000 employees: Structured prioritization and release cycles
  • Above 5,000 or global operations: Formal governance with regional delegation

But at every level, the core principle is the same: evaluate before you implement, test before you deploy wherever possible, and document what you did so someone else can understand it later.

Want help building governance that actually works? Book a strategy call or check out our fractional ATS administration services.

Already have governance but struggling with stakeholder communication? That’s a common topic in System Admin Insights – join other experienced admins who’ve solved this problem.


Frequently Asked Questions

Q: How formal does our governance process need to be?

A: It should match your organizational complexity. A 500-person company might just need a shared intake form and weekly review meetings. A 10,000-person global company needs formal workflows, multiple approval layers, and release management. Start simple and add structure as you grow.

Q: What’s a reasonable timeline for implementing change requests?

A: For routine changes: 2-4 weeks from request to implementation. For complex changes that require vendor involvement or extensive testing: 4-8 weeks. True emergencies (system outages, compliance risks): same day or next day. Set these expectations clearly so stakeholders know what’s realistic.

Q: How do we get a test environment if our vendor doesn’t include it?

A: Ask your vendor what it would cost to add one. Most enterprise ATS platforms offer test/sandbox environments as add-ons. The cost (typically a few thousand dollars annually) is minimal compared to the risk of making untested changes in production. If your vendor doesn’t offer this at any price, that’s a red flag about the maturity of their platform.

Q: What do we do when a VP demands an immediate change that hasn’t gone through governance?

A: You need air cover from senior leadership that governance applies to everyone. The conversation should be: “I can absolutely get this done – let me add it to this week’s priority queue. If this is truly urgent and can’t wait for our normal cycle, I’ll need written approval from [C-level sponsor] to fast-track it.” This usually separates real urgency from impatience.

Q: How do we prioritize when everything feels urgent?

A: Use the impact-effort-risk framework. High-impact + low-effort + low-risk goes first. Low-impact + high-effort gets declined or delayed regardless of who’s asking. For everything in between, your system admin should have authority to make judgment calls based on strategic priorities, with escalation to a steering committee only for major decisions.

Q: Should IT be involved in our ATS governance?

A: Only for changes that impact integrations, security, or infrastructure. IT doesn’t need to approve workflow changes or field modifications. Over-involving IT slows everything down and adds bureaucracy without meaningful risk reduction. Define clear boundaries: IT owns security and integrations, TA owns business process and configuration.

Q: What should be in our release notes?

A: What changed, why it changed, who it impacts, and what (if anything) users need to do differently. Keep it short. “We updated the requisition approval workflow to automatically skip the second approval if the position is under $75K. This affects all hiring managers in the Finance and Operations departments. No action required – your next req will follow the new process.” That’s it.

Q: How do we handle requests that are technically possible but strategically bad ideas?

A: Your system admin needs authority to say “here’s why this would create problems” and propose alternatives. This requires trust from leadership. Document the request, explain the risks, and suggest better solutions. If stakeholders insist anyway (after understanding the risks), implement it but document the decision for future reference. Sometimes people need to learn by experience.
