AI-powered resume parser and full Applicant Tracking System with 21 MCP tools. Parse PDFs, DOCX, TXT, Markdown, and URLs into structured JSON; extract skills, experience, and keywords; score and rank candidates; and run a full ATS pipeline covering jobs, candidates, interviews, offers, notes, and analytics — all from your AI assistant, no manual work required.

npx -y mcp-ai-hr-management-toolkit
Live demo: https://ai-hr-management-toolkit.vercel.app
You have 50 resumes to screen. Your AI assistant can reason about candidates — but it cannot open PDFs, extract structured data, or track pipeline stages. This toolkit bridges that gap.
Give your AI assistant 21 tools covering the entire hiring workflow:
20 of 21 tools are 100% algorithmic — no LLM calls, no API keys required. The AI calls tools, interprets the results, and delivers analysis. You just ask questions.
No installation needed. Point your MCP client at the package:
Claude Desktop — Edit %APPDATA%\Claude\claude_desktop_config.json (Windows) or ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{
  "mcpServers": {
    "ai-hr-management-toolkit": {
      "command": "npx",
      "args": ["-y", "mcp-ai-hr-management-toolkit"]
    }
  }
}
Cursor — Add to .cursor/mcp.json in your project root:
{
  "mcpServers": {
    "ai-hr-management-toolkit": {
      "command": "npx",
      "args": ["-y", "mcp-ai-hr-management-toolkit"]
    }
  }
}
VS Code Copilot — Create .vscode/mcp.json in your project root:
{
  "servers": {
    "ai-hr-management-toolkit": {
      "command": "npx",
      "args": ["-y", "mcp-ai-hr-management-toolkit"]
    }
  }
}
VS Code users: run the `npx` command from a directory that contains a `package.json` (i.e. any project root). The `cwd` key in `.vscode/mcp.json` can override the working directory if needed.
Windsurf / other MCP clients — Use the same npx pattern above.
Works from any project directory (requires a package.json in the working directory):
{
  "mcpServers": {
    "ai-hr-management-toolkit": {
      "command": "npx",
      "args": ["-y", "mcp-ai-hr-management-toolkit"]
    }
  }
}
Install once, use from any directory:
npm install -g mcp-ai-hr-management-toolkit
{
  "mcpServers": {
    "ai-hr-management-toolkit": {
      "command": "mcp-ai-hr-management-toolkit",
      "args": []
    }
  }
}
Deploy the Next.js app and use the Streamable HTTP transport:
https://your-domain.com/api/mcp
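Clients with native Streamable HTTP support can point directly at that URL. Clients that only speak stdio can reach it through a bridge such as the mcp-remote package (one common option; whether your client needs it is an assumption):

{
  "mcpServers": {
    "ai-hr-management-toolkit": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://your-domain.com/api/mcp"]
    }
  }
}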
Test locally:
npx @modelcontextprotocol/inspector http://localhost:3000/api/mcp
git clone <repo-url>
cd Resume-parser
npm install
npm run dev
Web UI at http://localhost:3000. MCP endpoint at http://localhost:3000/api/mcp. No .env needed — configure API keys in the UI or pass them per tool call.
All tools return structured JSON with next_steps hints so the AI knows what to call next.
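The exact payload differs per tool, but the envelope looks roughly like this (an illustrative sketch, not a verbatim response; every field except next_steps is an assumption):

{
  "success": true,
  "data": { "contact": { "name": "Jane Doe", "email": "[email protected]" } },
  "next_steps": [
    "Call inspect_pipeline with the extracted raw text for confidence scores",
    "Call analyze_resume with aspects=[\"skills\", \"similarity\"] to compare against a job description"
  ]
}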
| Tool | What it does | AI? |
|---|---|---|
| `parse_resume` | Parse PDF / DOCX / TXT / MD / URL → raw text + contacts, keywords, section map | No |
| `batch_parse_resumes` | Parse up to 20 files in one call, full pipeline on each | No |
| `inspect_pipeline` | Run the 5-stage analysis pipeline → confidence scores, entity counts, data quality report | No |
| Tool | What it does | AI? |
|---|---|---|
| `analyze_resume` | Master analysis tool with selectable aspects: keywords (TF-IDF + bigrams), patterns (date ranges, metrics, team sizes, career trajectory), entities (NER with 12 types + context disambiguation), skills (13 categories with proficiency estimation), experience (structured timeline), similarity (cosine, Jaccard, TF-IDF overlap vs. job description), or all | No |
`analyze_resume` consolidates what were previously 7 separate tools (`extract_keywords`, `detect_patterns`, `classify_entities`, `extract_skills_structured`, `extract_experience_structured`, `compute_similarity`, `analyze_resume_comprehensive`) into a single entry point with aspect selection.
| Tool | What it does | AI? |
|---|---|---|
| `assess_candidate` | Score against up to 8 weighted criteria axes → weighted total + pass / review / reject decision | Optional |
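The input shape below is illustrative (the criteria and weight field names are assumptions; call tools/list to see the real input schema). With per-axis scores and weights, the weighted total is sum(weight_i * score_i), thresholded into pass / review / reject:

{
  "resumeText": "…",
  "criteria": [
    { "name": "TypeScript depth", "weight": 0.4 },
    { "name": "Cloud experience", "weight": 0.35 },
    { "name": "Team leadership", "weight": 0.25 }
  ]
}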
| Tool | What it does | AI? |
|---|---|---|
| `export_results` | Export structured parse results to JSON or CSV | No |
| `send_email` | Send results via SMTP (config passed per call — no server-side secrets stored) | No |
| Tool | What it does | AI? |
|---|---|---|
| `ats_manage_jobs` | Full CRUD for job postings: create, read, update, delete, list, search by title/department/status | No |
| Tool | What it does | AI? |
|---|---|---|
| `ats_manage_candidates` | CRUD + analytics: add, update, move stage, bulk-move, filter, rank, compare, recommend stage changes, summarize | No |
| `ats_analytics` | Unified dashboard + pipeline analytics: stage distribution, conversion rates, avg time-in-stage, bottleneck detection, offer acceptance rate (see the sketch after this table) | No |
| `ats_search` | Global full-text search across all ATS entities (candidates, jobs, interviews, offers, notes) | No |
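Stage conversion rate is the standard funnel calculation: of the candidates who reached stage i, the fraction that went on to reach stage i + 1. A sketch with a hypothetical helper (not the ats-analytics.ts code):

// Stage-to-stage conversion over a pipeline funnel.
function conversionRates(
  reached: Record<string, number>,
  order: string[],
): Record<string, number> {
  const rates: Record<string, number> = {};
  for (let i = 0; i + 1 < order.length; i++) {
    const from = reached[order[i]] ?? 0;
    const to = reached[order[i + 1]] ?? 0;
    rates[`${order[i]}→${order[i + 1]}`] = from === 0 ? 0 : to / from;
  }
  return rates;
}

// conversionRates({ applied: 50, screen: 20, onsite: 8, offer: 3 },
//                 ["applied", "screen", "onsite", "offer"])
// → { "applied→screen": 0.4, "screen→onsite": 0.4, "onsite→offer": 0.375 }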
| Tool | What it does | AI? |
|---|---|---|
| `ats_schedule_interview` | Create, update, and delete interviews with conflict detection and interviewer availability check (see the sketch after this table) | No |
| `ats_interview_feedback` | Submit structured feedback, compute consensus score, summarize feedback across all interviewers | No |
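Conflict detection is classic interval overlap. A minimal TypeScript sketch of the idea (not the actual ats-schedule-interview.ts implementation):

interface Interview {
  interviewerId: string;
  start: Date; // inclusive
  end: Date;   // exclusive
}

// Two half-open intervals [a.start, a.end) and [b.start, b.end)
// overlap iff each one starts before the other ends.
function overlaps(a: Interview, b: Interview): boolean {
  return a.start < b.end && b.start < a.end;
}

// An interviewer is available for a proposed slot iff it overlaps
// none of their existing interviews.
function isAvailable(proposed: Interview, existing: Interview[]): boolean {
  return existing
    .filter((i) => i.interviewerId === proposed.interviewerId)
    .every((i) => !overlaps(proposed, i));
}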
| Tool | What it does | AI? |
|---|---|---|
| `ats_manage_offers` | Full offer lifecycle: draft → pending → approved → sent → accepted / declined / expired | No |
| `ats_manage_notes` | Add, update, search, and delete timestamped candidate notes | No |
| Tool | What it does | AI? |
|---|---|---|
| `ats_compliance` | EEO/EEOC reporting, GDPR export/erasure, audit trail, data retention policies | No |
| `ats_talent_pool` | Passive candidate talent pools (CRM): create pools, add/remove candidates, search, analytics | No |
| `ats_scorecard` | Structured interview scorecards with weighted criteria, per-evaluator scores, aggregate rankings | No |
| `ats_onboarding` | Post-hire onboarding checklists: tasks by category, assignees, progress tracking, overdue alerts | No |
| `ats_communication` | Email templates with `{{variable}}` interpolation, send/preview, communication history, stats (see the sketch after this table) | No |
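The {{variable}} interpolation is standard mustache-style substitution. A sketch of how it typically works (the real template logic lives in ats-communication.ts and may differ):

// Replace each {{key}} with the matching value; leave unknown keys intact.
function interpolate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, key: string) =>
    key in vars ? vars[key] : match,
  );
}

// interpolate("Hi {{firstName}}, your {{stage}} interview is confirmed.",
//             { firstName: "Ada", stage: "onsite" })
// → "Hi Ada, your onsite interview is confirmed."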
| Tool | What it does | AI? |
|---|---|---|
| `ats_generate_demo_data` | Generate a realistic sample ATS dataset (jobs, candidates, interviews, offers) for testing | No |
`assess_candidate` optionally calls an LLM when you supply `provider` + `apiKey`; otherwise it falls back to fully algorithmic scoring.
You: "Parse this resume and tell me if they're a good fit for our Senior Engineer role"
AI → parse_resume(file)
→ raw text, contact info, section map
AI → inspect_pipeline(rawText)
→ 5-stage confidence scores, entity classification
AI → analyze_resume(text, aspects=["skills", "patterns", "similarity"], jobDescription=...)
→ 13 skill categories with proficiency levels
→ career trajectory, metrics, date ranges
→ cosine 0.74, skill match 82%, gap analysis
AI synthesizes → "Strong match. 6 of 8 required skills present.
Two gaps: Kubernetes and system design at scale.
Recommend: Technical Screen"
Every resume runs through a 5-stage algorithmic pipeline:
┌─────────────┐ ┌──────────────┐ ┌──────────────┐ ┌────────────────┐ ┌───────────────┐
│ Ingestion │───▶│ Sanitization │───▶│ Tokenization │───▶│ Classification │───▶│ Serialization │
│ (file/URL) │ │ (noise trim) │ │ (TF-IDF) │ │ (NER + disamb) │ │ (structured) │
└─────────────┘ └──────────────┘ └──────────────┘ └────────────────┘ └───────────────┘
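In code, the orchestrator amounts to a left-to-right fold over stage functions. A simplified TypeScript sketch of the idea (the real src/lib/analysis/pipeline.ts also tracks entity counts and richer per-stage confidence):

// Each stage reads the shared context and returns an updated copy.
interface PipelineContext {
  raw: string;
  tokens?: string[];
  entities?: Record<string, string[]>;
  confidence: Record<string, number>;
}

type Stage = (ctx: PipelineContext) => PipelineContext;

// Example stage: sanitization collapses whitespace and records confidence.
const sanitize: Stage = (ctx) => ({
  ...ctx,
  raw: ctx.raw.replace(/\s+/g, " ").trim(),
  confidence: { ...ctx.confidence, sanitization: 1.0 },
});

// The orchestrator folds the input through the stage list in order.
const runPipeline = (stages: Stage[], raw: string): PipelineContext => {
  const initial: PipelineContext = { raw, confidence: {} };
  return stages.reduce((ctx, stage) => stage(ctx), initial);
};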
Output: a ResumeSchema object with confidence scores and data quality metrics.

| Format | Extensions | Parser |
|---|---|---|
| PDF | `.pdf` | pdf-parse v2 |
| DOCX | `.docx` | mammoth |
| Plain text | `.txt` | direct read |
| Markdown | `.md`, `.markdown` | regex-based |
| URL / HTML | any URL string | cheerio |
Max file size: 10 MB
contact — name, email, phone, location, LinkedIn, GitHub, website, portfolio
summary — professional summary text
skills[] — name, category (13 types), proficiency, usage context
experience[] — company, title, start/end dates, highlights, achievements (with metrics), technologies
education[] — institution, degree, field, dates, GPA
certifications[] — name, issuer, date, credential URL
projects[] — name, description, URL, technologies, highlights
languages[] — spoken language and proficiency
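A trimmed sketch of roughly what the schema looks like (field shapes here are assumptions for illustration; the authoritative definition is src/lib/schemas/resume.ts):

import { z } from "zod";

const SkillSchema = z.object({
  name: z.string(),
  category: z.string(),            // one of the 13 categories
  proficiency: z.string().optional(),
  context: z.string().optional(),  // where the skill was used
});

export const ResumeSchema = z.object({
  contact: z.object({
    name: z.string().optional(),
    email: z.string().optional(),
    phone: z.string().optional(),
    linkedin: z.string().optional(),
  }),
  summary: z.string().optional(),
  skills: z.array(SkillSchema),
  experience: z.array(
    z.object({
      company: z.string(),
      title: z.string(),
      startDate: z.string().optional(),
      endDate: z.string().optional(),
      highlights: z.array(z.string()),
    }),
  ),
});

export type Resume = z.infer<typeof ResumeSchema>;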
The app ships with a full web interface:
| Tab | Description |
|---|---|
| Single Parse | Upload one file or paste a URL. Returns structured data, pipeline visualization, and AI-enhanced analysis |
| Batch Parse | Upload up to 20 files. Export to JSON / CSV / PDF or email results |
| Chat | Conversational interface with tool access — ask questions about any parsed resume |
| ATS | Full pipeline board: jobs, candidates (Kanban), interviews, offers, and analytics dashboard |
Switch AI providers from the selector at the top. Supports OpenAI, Anthropic, Google, DeepSeek, GLM, Qwen, OpenRouter, and OpenCode Zen.
All endpoints accept multipart/form-data with optional headers:
| Header | Description |
|---|---|
| `x-api-key` | Your AI provider API key |
| `x-ai-provider` | `openai` / `anthropic` / `google` / `deepseek` / `glm` / `qwen` / `openrouter` / `opencodezen` |
| `x-ai-model` | Specific model ID |
# Parse a single resume
curl -X POST http://localhost:3000/api/parse \
-H "x-api-key: sk-..." \
-F "[email protected]"
# Batch parse (up to 20 files)
curl -X POST http://localhost:3000/api/batch-parse \
-H "x-api-key: sk-..." \
-F "[email protected]" \
-F "[email protected]"
# MCP endpoint (Streamable HTTP)
curl -X POST http://localhost:3000/api/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"tools/list","id":1}'
# Export parsed data
curl -X POST http://localhost:3000/api/export \
-H "Content-Type: application/json" \
-d '{"format":"csv","results":[...]}'
| Layer | Technologies |
|---|---|
| Framework | Next.js 16 (App Router, Turbopack), React 19, TypeScript |
| AI | Vercel AI SDK v6, multi-provider (OpenAI, Anthropic, Google, DeepSeek, GLM, Qwen, OpenRouter) |
| MCP | @modelcontextprotocol/sdk v1.29 — Streamable HTTP + stdio transports |
| Parsing | pdf-parse v2, mammoth, cheerio |
| NLP | TF-IDF, NER, cosine similarity, Jaccard index (all in-process, no external services; see the sketch after this table) |
| Schema | Zod v4 |
| Export | ExcelJS (CSV/XLSX), jsPDF + jspdf-autotable |
| Email | Nodemailer |
| Styling | Tailwind CSS v4, Framer Motion |
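As a reminder of what the two headline similarity measures compute, a minimal TypeScript sketch over token sets and term-frequency maps (not the actual scoring.ts code):

// Jaccard index: |A ∩ B| / |A ∪ B| over token sets.
function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

// Cosine similarity over term-frequency vectors keyed by token.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0;
  for (const [t, w] of a) dot += w * (b.get(t) ?? 0);
  const norm = (v: Map<string, number>) =>
    Math.sqrt([...v.values()].reduce((s, w) => s + w * w, 0));
  const denom = norm(a) * norm(b);
  return denom === 0 ? 0 : dot / denom;
}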
npm install
# Start dev server (Web UI at :3000 + MCP at /api/mcp)
npm run dev
# Build the standalone MCP CLI (stdio transport)
npm run build:mcp
# Build the Next.js app for production
npm run build
# Test MCP with the official inspector
npx @modelcontextprotocol/inspector http://localhost:3000/api/mcp
npx @modelcontextprotocol/inspector node dist/mcp-stdio.js
# Lint
npm run lint
src/
├── app/
│ ├── page.tsx # Main UI (tabs, provider selector, chat, ATS)
│ ├── layout.tsx # Root layout + global styles
│ └── api/
│ ├── parse/route.ts # Single resume parse
│ ├── batch-parse/route.ts
│ ├── chat/route.ts # Conversational AI with tool access
│ ├── mcp/route.ts # MCP server (Streamable HTTP)
│ ├── models/route.ts # Provider model listing
│ ├── export/route.ts # JSON / CSV / PDF export
│ └── email/route.ts # SMTP email
├── components/ # React UI components (parse, batch, chat, ATS)
│ └── ats/ # ATS-specific views (Kanban, Dashboard, Scheduler…)
└── lib/
├── ai-model.ts # Multi-provider model config (no env fallback)
├── mcp-server.ts # MCP server — registers all 21 tools
├── schemas/
│ ├── resume.ts # Zod v4 ResumeSchema
│ └── criteria.ts # Assessment criteria schema
├── analysis/
│ ├── pipeline.ts # 5-stage pipeline orchestrator
│ ├── sanitizer.ts # Text cleaning
│ ├── keyword-extractor.ts # TF-IDF
│ ├── classifier.ts # NER with context disambiguation
│ ├── pattern-matcher.ts # Regex extraction (metrics, dates, contacts)
│ └── scoring.ts # Cosine similarity, Jaccard, skill matching
├── parser/
│ ├── pdf.ts, docx.ts, text.ts, markdown.ts, url.ts
│ └── index.ts
├── ats/
│ ├── types.ts # ATS entity types
│ ├── store.ts # In-memory ATS state
│ ├── demo-data.ts # Realistic seed data generator
│ └── context.tsx # React context for ATS state
└── tools/
├── parse-resume.ts # parse_resume
├── inspect-pipeline.ts # inspect_pipeline
├── export-results.ts # export_results
├── send-email.ts # send_email
└── mcp/ # 17 MCP-specific tools
├── analyze-resume.ts # analyze_resume (unified: keywords, patterns, entities, skills, experience, similarity)
├── batch-parse.ts # batch_parse_resumes
├── assess-candidate.ts # assess_candidate
├── ats-manage-candidates.ts # ats_manage_candidates (includes rank/filter/compare/summarize)
├── ats-manage-jobs.ts
├── ats-manage-offers.ts
├── ats-manage-notes.ts
├── ats-analytics.ts # ats_analytics (unified dashboard + pipeline)
├── ats-schedule-interview.ts
├── ats-interview-feedback.ts
├── ats-search.ts
├── ats-generate-demo-data.ts
├── ats-compliance.ts # Enterprise: EEO / GDPR / audit
├── ats-talent-pool.ts # Enterprise: passive candidate CRM
├── ats-scorecard.ts # Enterprise: structured scorecards
├── ats-onboarding.ts # Enterprise: onboarding checklists
└── ats-communication.ts # Enterprise: email templates & history