How to Build an AI Agent Team for Your Recruitment Agency (Step-by-Step)
We analysed 1,643 recruitment agency sales calls. The #1 pain - mentioned 1,847 times - is building sourcing lists manually. Here is the exact process to build an AI Agent Team that solves it overnight.
Building an AI Agent Team for your recruitment agency means creating a set of specialised AI workers in Claude Code - each one owning a specific job that your recruiters don't have time for. This guide walks through the exact steps to go from zero to a working List Builder Agent and BD Outreach Agent, grounded in data from 1,643 real recruitment agency conversations.
Why Recruitment Agencies Are Building Agent Teams, Not Just Using AI Tools
Most recruitment agencies have tried AI by now. A ChatGPT plugin here, a LinkedIn automation tool there. Maybe an n8n workflow for CRM updates. The tools work in isolation - but they don't change how the agency actually operates.
The reason is structural.
Every recruiter at a growing agency is expected to be six specialists simultaneously: sourcer, BD manager, CRM admin, copywriter, account manager, and analyst. They can only be one at a time. The other five jobs fall through the cracks.
This is not a people problem. It is a system problem.
We analysed 1,643 Fathom recordings of sales and discovery calls with recruitment agency owners. 20,592 structured data points. The results were clear:
Manual list building was mentioned as a pain point 1,847 times across 1,643 sales calls (Source: Automindz VOC Analysis, 2026).
The #2 pain - BD stopping the moment delivery starts - came up 1,523 times. In their own words: "Business development efforts are neglected when recruiters are busy with active projects."
That is not a feature request. That is an entire revenue function failing on a regular basis. We covered why automation fails in a separate post - the short answer is that individual tools without a coordinating system do not solve structural problems.
An AI Agent Team does not add a feature to your existing process. It fills the specialist roles that nobody has time for. The List Builder Agent builds candidate lists overnight. The BD Agent keeps finding new targets while your team is heads-down on delivery. The Pre-qual Agent screens candidates before any human time is spent.
The recruiters stay focused on what only humans can do: relationships, negotiation, judgment calls. The agents handle the rest.
What an AI Agent Team Actually Is
An AI Agent Team is not a chatbot. It is not a prompt library. It is a set of Claude Code agents - each defined by a markdown file with a clear role, a specific set of tools, and explicit rules about when to act and when to ask for approval.
Each agent is essentially an SOP for an AI. Instead of training a human employee by walking them through a playbook, you train an agent by writing the playbook as a markdown file. The agent reads it, connects to its tools, and works.
The structure looks like this:
Your Claude Code project (the "harness")
├── CLAUDE.md -- system prompt, read before every message
├── .claude/
│ └── agents/
│ ├── list-builder.md -- the sourcing agent
│ ├── bd-outreach.md -- the business development agent
│ └── orchestrator.md -- coordinates the team, sends daily digest
└── client/
├── config.md -- your ICP, niche, brand voice (changes per deployment)
└── state/ -- files agents read and write to share context
The key insight: agents communicate through files, not through each other's memory. They wake up, read the current state, do their job, write the output, and notify you via Telegram. Nothing sends without your approval.
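The handoff can be sketched in plain shell. This is a minimal illustration of the file-based pattern, not the harness itself - the JSON shape is an assumption:

```shell
mkdir -p client/state

# Agent A finishes its work and records a pending decision in shared state
cat > client/state/pending-approvals.json <<'EOF'
[{"agent": "list-builder", "role": "role-001", "status": "pending"}]
EOF

# Agent B (or the orchestrator) wakes up later and reads the same file --
# no shared memory, just files on disk
grep '"status": "pending"' client/state/pending-approvals.json
```

Because the state lives on disk, any agent can crash, restart, or run hours later and still pick up exactly where the team left off.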
“A recruiter with a ClaudeBrain is faster. A recruiter with an AI Agent Team is leveraged. The difference is whether AI is doing tasks you ask for - or running the functions you never have time for.”
What You Need Before You Start
Claude Code setup:
- Claude Pro subscription ($20/month minimum; the Max plan at $100-200/month for heavier use)
- Claude Code installed (run npm install -g @anthropic-ai/claude-code in your terminal)
- VS Code or any editor with a terminal
API keys (stored in your .env file):
- Anthropic API key (from console.anthropic.com)
- LinkedIn data access (RapidAPI LinkedIn MCP - free tier available)
- Email outreach tool: Instantly or Lemlist
- Contact enrichment: Prospeo for email finding (trial available)
Time: Budget 2-3 hours for setup and your first agent. The second agent takes 1-2 hours once you understand the pattern.
You do not need to know how to code. Agent files are written in plain English. If you can write a job spec or an onboarding SOP, you can write an agent.
Step 1: Set Up Your Harness and CLAUDE.md
The harness is a Claude Code project folder that becomes your agency's AI operating system. Every agent lives inside it. Every state file gets written to it.
Start by creating a new folder and opening it in VS Code:
mkdir my-agency-harness
cd my-agency-harness
code .
Open Claude Code (Cmd+Shift+P, type "Claude Code") and run:
/init
This command scans your project and generates a CLAUDE.md file automatically. It is a starting point - you will customise it.
What CLAUDE.md is: A markdown file read before every single message you or an agent sends. Think of it as the permanent context your AI team carries. Keep it lean - large CLAUDE.md files burn tokens on every interaction.
Three layers every CLAUDE.md needs:
- WHAT - your project structure, key files, available tools, and agent roster
- WHY - the purpose of each component and what breaks if it fails
- HOW - behaviour rules: always read client/config.md first, always check pending approvals before acting, never send outreach without approval in pending-approvals.json
Here is the rule that matters most: add this line to your CLAUDE.md:
On any uncertainty - write to client/state/pending-approvals.json, notify Telegram, and STOP.
This is your safety net. No agent acts on ambiguity. It asks.
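Put together, a minimal CLAUDE.md might look like this - the exact wording is an assumption, so adapt it to your own harness:

```markdown
# CLAUDE.md

## WHAT
- Agents live in .claude/agents/: list-builder, bd-outreach, orchestrator
- Shared state lives in client/state/; client/config.md holds ICP and brand voice

## WHY
- client/config.md is the single per-deployment file; if it is stale,
  every agent targets the wrong niche

## HOW
- Always read client/config.md first
- Always check client/state/pending-approvals.json before acting
- On any uncertainty - write to client/state/pending-approvals.json,
  notify Telegram, and STOP
```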
Now create your client config file at client/config.md. This is the only file that changes when you deploy for a different role or niche:
## Agency Profile
Niche: [e.g. "Life science recruitment, UK permanent roles, £50-150K band"]
## ICP - Candidates We Place
Target titles: [e.g. "Senior Scientist, Principal Scientist, Research Scientist"]
Seniority: [e.g. "Mid to senior, 5+ years experience"]
Location: UK (willing to consider remote-first roles for right candidates)
## Brand Voice
Tone: Direct and professional. Reference specific role context. No generic templates.
Fill this in for your niche before building any agents. Every agent reads it.
Step 2: Build Your First Agent - The List Builder
Agencies told us in their own words that they spend 10-50 hours per project on manual candidate list building (Source: Automindz VOC Analysis, 2026).
The List Builder is the right first agent to build. The data is unambiguous: 1,847 mentions of manual list building as a pain across the agencies we spoke to. The ROI is immediate - overnight sourcing lists in exchange for a few hours of setup.
Create a file at .claude/agents/list-builder.md:
---
name: list-builder
description: Builds qualified candidate lists for open roles. Use when a new role brief arrives or when a sourcing list is needed. Returns a scored, enriched longlist ready for human review. Do not proceed to outreach without approval.
model: claude-sonnet-4-5
tools:
- Bash
- Read
- Write
---
# List Builder Agent
Always read client/config.md first. Always read client/state/active-roles.json.
You are the List Builder. Your job is to build a qualified, enriched candidate longlist
for a given role. You do NOT send any outreach. You produce the list and wait for approval.
## Process
1. Read the role brief from client/state/active-roles.json
2. Search LinkedIn via RapidAPI for matching candidates using ICP criteria from config.md
3. Enrich each candidate: find email via Prospeo, verify current role via LinkedIn profile
4. Score each candidate against role criteria on a scale of 0-10 with reasoning
5. Deduplicate against client/state/crm-contacts.json (skip anyone already in CRM)
6. Write output to client/state/approved-lists/[role-name]-[date]-draft.json
7. Send Telegram notification: "List Builder complete. [N] candidates for [role]. Review at [file path]."
## Human Gate
After writing the draft list - STOP. Do not contact any candidate.
Write to client/state/pending-approvals.json.
Notify Telegram and wait for approval.
## Escalation
Fewer than 30 candidates found - alert the recruiter with suggested criteria adjustments.
Any candidate marked as a previous client contact - exclude and flag separately.
Three things to notice about this file:
The description field is how the Orchestrator (and Claude itself) decides when to invoke this agent. Write it like a tool description, not a job title. Clear descriptions mean accurate invocation.
The agent reads client/config.md first - this is how it knows it is searching for life science candidates in the UK rather than fintech candidates in New York. One agent file, any niche.
The human gate is explicit and hard. The agent writes the list, notifies you, and stops. It does not decide your candidates are good enough to contact. That call stays with you.
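The draft file the agent writes in step 6 could take a shape like this - the field names are an assumption, so define whatever your review flow needs:

```json
[
  {
    "name": "Jane Example",
    "linkedin_url": "https://linkedin.com/in/example",
    "email": "jane@example.com",
    "current_role": "Senior Scientist, ExampleBio",
    "score": 8,
    "reasoning": "PhD, 6 years industry protein engineering, UK-based",
    "status": "awaiting-approval"
  }
]
```

A consistent shape matters more than the exact fields: the Orchestrator and the approval gate both read these files, so every agent should write the same structure.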
Step 3: Connect Your Tools via MCP
MCP (Model Context Protocol) is how Claude Code connects to external tools. Each connection adds a new capability to your agents without any code - just a config entry and an API key.
Add your tool connections to your Claude Code settings (Cmd+Shift+P, "Open Claude Code Settings"):
{
"mcpServers": {
"linkedin": {
"command": "npx",
"args": ["@rapidapi/linkedin-mcp"],
"env": {
"RAPIDAPI_KEY": "${RAPIDAPI_KEY}"
}
},
"lemlist": {
"command": "npx",
"args": ["@lemlist/mcp-server"],
"env": {
"LEMLIST_API_KEY": "${LEMLIST_API_KEY}"
}
}
}
}
Store every API key in a .env file in your project root - never in the config itself. Claude Code reads .env automatically. Your keys never appear in conversation history.
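A matching .env might look like this - the key names must line up with the ${...} references in your MCP config, and the values shown are placeholders:

```shell
# .env - never commit this file
ANTHROPIC_API_KEY=your_anthropic_key
RAPIDAPI_KEY=your_rapidapi_key
LEMLIST_API_KEY=your_lemlist_key
```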
For each connected tool, create a short cheat sheet at client/knowledge/mcp-cheatsheets/[tool].md. This is a plain English guide Claude reads to understand which tool to use when:
## LinkedIn MCP Tools
Use Search_People when: finding candidates matching criteria
Use Enrich_Profile_Data when: you have a LinkedIn URL and need full profile data
Use Search_Companies when: finding companies matching ICP for BD
Do not use: direct message tools - outreach goes through Instantly or Lemlist only
This pattern - a readable guide alongside the MCP connection - dramatically improves how accurately agents use their tools.
Step 4: Set Up Human Approval Gates
The most important part of any AI Agent Team is not how fast it works. It is how reliably it stops at the right moments.
Every high-stakes action in your harness routes through a pending approvals file and a Telegram notification. The pattern is the same across every agent:
Agent completes work requiring a decision
→ Writes to client/state/pending-approvals.json
→ Sends Telegram: "List Builder | [Role Name]
[N] candidates found. Top 3: [names + scores].
Full list at client/state/approved-lists/[file].
Reply: approve / reject / edit [instructions]"
→ STOPS
Your Telegram reply → updates pending-approvals.json
Next agent run sees approval and proceeds
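A minimal sketch of that round-trip, assuming your bot wrapper rewrites the state file when you reply - sed stands in for that step here (note that macOS sed needs -i ''):

```shell
mkdir -p client/state

# The agent wrote this before stopping
echo '[{"id": "role-001", "agent": "list-builder", "status": "pending"}]' \
  > client/state/pending-approvals.json

# Your Telegram reply of "approve" flips the status in shared state
sed -i 's/"status": "pending"/"status": "approved"/' client/state/pending-approvals.json

# The next agent run reads the file and sees it can proceed
cat client/state/pending-approvals.json
```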
To set up Telegram notifications, your agents call mav-tools (or any Telegram bot wrapper) via bash. Add your Telegram bot token and chat ID to .env:
TELEGRAM_BOT_TOKEN=your_bot_token
TELEGRAM_CHAT_ID=your_chat_id
Agents send notifications with:
mav-tools notify "List Builder complete - 47 candidates for Senior Scientist role. Reply to approve."
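mav-tools is one wrapper; under the hood, any wrapper hits the Telegram Bot API's sendMessage endpoint. If you prefer to call it directly from an agent's bash step, the shape looks like this - shown as a dry run (remove the echo to actually send), with placeholder credentials:

```shell
# Placeholder credentials -- in the harness these come from .env
TELEGRAM_BOT_TOKEN=123456:EXAMPLE
TELEGRAM_CHAT_ID=987654321

# Telegram Bot API: POST /bot<token>/sendMessage with chat_id and text.
# The leading echo makes this a dry run; drop it to send for real.
echo curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
  --data-urlencode "chat_id=${TELEGRAM_CHAT_ID}" \
  --data-urlencode "text=List Builder complete - 47 candidates. Reply to approve."
```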
This gate is not optional. It is the mechanism that makes an AI Agent Team trustworthy enough to actually use. Before any list goes to outreach, you see it. Before any sequence activates, you approve the template. The agent does the work. You make the decisions.
Price was mentioned as an objection only 7 times across 1,643 agency sales calls - internal buy-in, at 178 mentions, is the real barrier (Source: Automindz VOC Analysis, 2026).
The approval gate is also what gets internal buy-in. Nobody on your team is worried about AI sending unreviewed messages to candidates when they can see that every list, every sequence, every brief requires a human decision before it moves.
Step 5: Schedule and Run Your First Agent
With your harness set up, config filled in, and List Builder agent configured, add a role brief to kick things off:
Create client/state/active-roles.json:
[
{
"id": "role-001",
"title": "Senior Scientist - Protein Engineering",
"client": "BioTech Client A",
"brief": "Looking for a PhD-level protein engineer with 5+ years industry experience, UK-based, open to hybrid working. Must have experience with structural biology and CRISPR applications.",
"deadline": "2026-04-10",
"priority": "high"
}
]
Now trigger the List Builder directly from Claude Code:
Run the list-builder agent for role-001
Claude Code invokes the agent, which reads the role brief, searches LinkedIn, enriches contacts, scores candidates, and sends you a Telegram message when the draft list is ready. You wake up to a scored, enriched candidate list for your Senior Scientist role - built overnight while you were doing something else.
That is the first proof of concept. From here, the system compounds.
What to Build Next: The Full Agent Team
BD stopping when delivery starts was mentioned 1,523 times - the #2 pain across 1,643 agency sales calls (Source: Automindz VOC Analysis, 2026).
The List Builder addresses the #1 pain. Once it is running reliably, build the BD Outreach Agent to address #2.
The BD Agent monitors job boards and LinkedIn for companies showing hiring signals in your niche - the same signal-based approach we have documented across 40+ agency deployments. It enriches decision-makers, drafts personalised sequences referencing the specific signal, and queues them for your approval before anything sends via Instantly or Lemlist.
The result: your BD pipeline keeps moving even when your team is fully occupied on delivery.
Once you have two agents running, add the Orchestrator. This agent runs on a schedule (every morning at 6am), reads your current pipeline state, decides which agents need to run today, spawns them in parallel, and sends you a combined digest. You go from manually triggering each agent to receiving a single morning Telegram message: "Orchestrator complete. 3 new BD targets flagged. List for [role] ready for review."
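The 6am schedule is a standard cron job. A hypothetical crontab entry - assuming the claude CLI is on cron's PATH, your harness lives at /path/to/my-agency-harness, and claude -p is used to run a single prompt headlessly:

```shell
# m h dom mon dow  command
0 6 * * * cd /path/to/my-agency-harness && claude -p "Run the orchestrator agent" >> orchestrator.log 2>&1
```

Logging to a file matters here: cron runs without a terminal, so the log is your only record of what the Orchestrator did before the Telegram digest arrives.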
The full minimum viable team:
- Week 1: Harness set up, CLAUDE.md written
- Week 2: List Builder live
- Week 3: BD Outreach Agent live
- Week 4: Orchestrator coordinating both, approval gates wired
From there, the next additions build on the same structure: a Pre-qual Agent for candidate screening, a Brief Writer for polished write-ups, a Data Steward to keep your CRM clean overnight.
Each agent is the same format - a markdown file with YAML, instructions, and a human gate. The complexity does not stack. The value does.
The Difference This Makes in Practice
Here is what changes when a recruitment agency moves from manual operations to a running AI Agent Team:
Before: A consultant at a 5-person life science agency spends Monday morning building a candidate list. 6 hours. 47 LinkedIn searches. 12 profiles enriched by hand. Delivery Tuesday afternoon is now pushed to Wednesday.
After: The List Builder ran Sunday night. 63 scored candidates in the approved-lists folder at 7am Monday. The consultant reviews the list in 20 minutes, approves 40, and spends Monday morning calling the top 10. First candidates submitted by end of day Monday.
That is not a marginal efficiency gain. That is a different way of running the desk.
For our own analysis - running 1,643 sales call transcripts through an extraction pipeline and clustering 20,592 data points via parallel Claude Sonnet API calls - the infrastructure cost was $1.94 and the processing time was 40 seconds. The approach that revealed the list building stat is the same one your agency can use to understand your own data.
The barrier to entry is a Claude Pro subscription and a few hours of configuration. The barrier to not building this is continuing to spend 10-50 hours per project on work that a properly configured agent does overnight.
Getting Started: The AcademyOS Path
If you want to build this yourself with guidance, our AcademyOS program walks you through exactly this process over 6 weeks. You leave with a live, deployed AI Agent Team running in your own Claude Code environment - not a homework exercise, not a demo, your actual recruitment operation.
Week 1 covers harness setup and CLAUDE.md. Week 2 is the List Builder, running against your real open roles. Week 3 is the BD Agent. By Week 4, you have both agents coordinated through an Orchestrator sending you a daily digest.
The program is built for recruitment agency owners who are tech-curious but not technical. If you can follow instructions and write an SOP, you can build this. The existing Claude Code overview post covers the broader landscape if you want context before diving into the build.
If you want us to build it for you instead, our done-for-you service deploys the full agent team for your agency in 14 days. Either way, the harness is the same - and the data behind it is the same 20,592 data points that tell us where the time is really going.
The next step is opening Claude Code, running /init, and writing your first CLAUDE.md. Everything else follows from that.
Written by

Niklas Huetzen
CEO & Co-Founder
Niklas leads Automindz Solutions, helping recruitment agencies across the globe build AI-powered pipeline systems that deliver warm meetings on autopilot.
Connect on LinkedIn →