A Python-based microservice that scrapes, deduplicates, and stores fresh job listings from multiple platforms. It enables users to access and filter a live feed of job data through a REST API for integration into portfolio sites and other applications.
A standalone Python microservice that scrapes fresh job listings using Jobspy, stores them in SQLite with deduplication, and exposes a /jobs REST endpoint for embedding in a portfolio site as a live feed.
```
APScheduler (1hr) → Jobspy Scraper → SQLite (deduped) ← FastAPI /jobs
                                                             ↕
                                                   Portfolio Site (fetch)
```
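The "deduped" step above can be implemented directly in SQLite. A minimal sketch, assuming a unique key on `(job_title, company, apply_link)` (the actual schema and key columns may differ):

```python
import sqlite3

def store_jobs(conn: sqlite3.Connection, jobs: list[dict]) -> int:
    """Insert scraped jobs, silently skipping rows that duplicate an
    existing (job_title, company, apply_link) triple.
    Returns the number of newly inserted rows."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS jobs (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               job_title TEXT, company TEXT, apply_link TEXT,
               UNIQUE (job_title, company, apply_link)
           )"""
    )
    before = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
    # INSERT OR IGNORE drops rows that violate the UNIQUE constraint,
    # so re-scraping the same listing is a no-op.
    conn.executemany(
        "INSERT OR IGNORE INTO jobs (job_title, company, apply_link) VALUES (?, ?, ?)",
        [(j["job_title"], j["company"], j["apply_link"]) for j in jobs],
    )
    conn.commit()
    after = conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
    return after - before
```

Relying on a `UNIQUE` constraint keeps deduplication in the database rather than in Python, so concurrent scrape runs cannot race each other into duplicates.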
```bash
cd jobs-mcp-server
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env
# Edit .env as needed
python main.py
```
The server starts at http://localhost:8000. An initial scrape runs automatically in the background.
GET / — Health Check

```json
{
  "status": "healthy",
  "service": "Job Listings MCP Server",
  "total_jobs_in_db": 142,
  "scrape_interval_hours": 1
}
```
GET /jobs — List Job Listings

Query Params:

| Param | Type | Description |
|---|---|---|
| `location` | string | Filter by location (substring, case-insensitive) |
| `keyword` | string | Filter by keyword in job title |
| `hours` | int | Only jobs scraped within the last N hours |
| `limit` | int | Max results (default 100, max 500) |
| `offset` | int | Pagination offset |
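These filters map naturally onto a parameterized SQLite query. A sketch of how the translation could look (column names are assumed from the response schema below; the actual query builder may differ):

```python
def build_jobs_query(location=None, keyword=None, hours=None, limit=100, offset=0):
    """Compose a parameterized SELECT for /jobs from the optional filters."""
    clauses, params = [], []
    if location:
        # Substring, case-insensitive match, per the param table.
        clauses.append("location LIKE ? COLLATE NOCASE")
        params.append(f"%{location}%")
    if keyword:
        clauses.append("job_title LIKE ? COLLATE NOCASE")
        params.append(f"%{keyword}%")
    if hours:
        # Only rows scraped within the last N hours.
        clauses.append("date_scraped >= datetime('now', ?)")
        params.append(f"-{int(hours)} hours")
    where = f"WHERE {' AND '.join(clauses)}" if clauses else ""
    sql = f"SELECT * FROM jobs {where} ORDER BY date_scraped DESC LIMIT ? OFFSET ?"
    params += [min(limit, 500), offset]  # enforce the documented max of 500
    return sql, params
```

Binding every user-supplied value as a `?` parameter (rather than string-formatting it into the SQL) keeps the endpoint safe from injection.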
Example:

```bash
curl "http://localhost:8000/jobs?location=San%20Francisco&keyword=AI&hours=24"
```

Response:

```json
{
  "count": 5,
  "filters": {
    "location": "San Francisco",
    "keyword": "AI",
    "hours": 24
  },
  "jobs": [
    {
      "id": 1,
      "job_title": "AI Solutions Engineer",
      "company": "Acme Corp",
      "location": "San Francisco, CA",
      "salary": "USD 120,000–160,000/yearly",
      "apply_link": "https://linkedin.com/jobs/...",
      "date_posted": "2025-01-15",
      "date_scraped": "2025-01-15T12:00:00+00:00",
      "source_site": "linkedin",
      "role_tier": "T2 — Secondary"
    }
  ]
}
```
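The same request can be made from Python's standard library, e.g. for a quick smoke test against a local server (the `jobs_url` helper is illustrative, not part of the repo):

```python
import json
import urllib.parse
import urllib.request

def jobs_url(base_url: str = "http://localhost:8000", **filters) -> str:
    """Build the /jobs URL for the given filters, dropping unset ones."""
    query = urllib.parse.urlencode({k: v for k, v in filters.items() if v is not None})
    return f"{base_url}/jobs?{query}"

def fetch_jobs(**filters) -> dict:
    """GET /jobs and decode the JSON response (requires a running server)."""
    with urllib.request.urlopen(jobs_url(**filters)) as resp:
        return json.loads(resp.read())
```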
POST /scrape — Manual Trigger

Triggers a scrape run in the background.

```bash
curl -X POST http://localhost:8000/scrape
```
GET /status — Last Scrape Status

```bash
curl http://localhost:8000/status
```
GET /roles — Configured Role Tiers

```bash
curl http://localhost:8000/roles
```
To deploy (e.g. on Railway):

1. Push the mcp-server repo to a new GitHub repo (or subdirectory).
2. Mount a volume at `/data` to persist the SQLite DB.
3. Install dependencies: `pip install -r requirements.txt`
4. Start the server: `python main.py`
5. For persistent storage, mount `/data` and set `DATA_DIR=/data`.

In your Next.js portfolio, fetch from the deployed URL:
```typescript
// In a Next.js API route or client component
const API_URL = process.env.NEXT_PUBLIC_JOBS_API_URL || 'https://your-jobs-server.up.railway.app';

async function fetchJobs(filters?: { location?: string; keyword?: string; hours?: number }) {
  const params = new URLSearchParams();
  if (filters?.location) params.set('location', filters.location);
  if (filters?.keyword) params.set('keyword', filters.keyword);
  if (filters?.hours) params.set('hours', String(filters.hours));

  const res = await fetch(`${API_URL}/jobs?${params.toString()}`);
  return res.json();
}
```
MIT
Add this to `claude_desktop_config.json` (adjust the command and path for your local checkout) and restart Claude Desktop:

```json
{
  "mcpServers": {
    "job-listings-mcp-server": {
      "command": "python",
      "args": ["/path/to/jobs-mcp-server/main.py"]
    }
  }
}
```