Extracts and manages job postings from HackerNews 'Who's Hiring' threads, enabling users to search and analyze listings through Claude Desktop. It provides tools for keyword-based job searches and detailed post retrieval while utilizing a file-based caching system.
A Python-based job scraper that extracts job postings from HackerNews "Who's Hiring" threads and exposes them via an MCP (Model Context Protocol) server for Claude Desktop integration.
```bash
# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run the scraper directly
python scraper.py
```
This will scrape the default HackerNews thread and show how many job postings were found.
Add the following to your Claude Desktop settings (usually at ~/Library/Application Support/Claude/claude_desktop_config.json):
```json
{
  "mcpServers": {
    "hn-job-scraper": {
      "command": "python3",
      "args": ["/path/to/your/project/mcp_server.py"],
      "env": {
        "PYTHONPATH": "/path/to/your/project"
      }
    }
  }
}
```
Important: Replace /path/to/your/project with the actual path to this project directory.
Once configured, you can use these tools in Claude Desktop:
- `search_jobs` - Search job postings by keywords (e.g., "python", "remote", "senior")
- `get_job_details` - Get full details of a specific job posting
- `refresh_jobs` - Clear the cache and fetch fresh job data

```bash
# Activate virtual environment
source venv/bin/activate

# Run scraper
python scraper.py

# Start MCP server (usually called by Claude Desktop)
python mcp_server.py
```
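To make the tool behavior concrete, here is a minimal sketch of the kind of keyword filtering `search_jobs` performs. This is illustrative only, not the actual implementation in `mcp_server.py`; the posting structure and the `text` field name are assumptions.

```python
def search_jobs(postings, keywords):
    """Return postings whose text contains every keyword, case-insensitively."""
    terms = [k.lower() for k in keywords]
    return [p for p in postings if all(t in p["text"].lower() for t in terms)]

# Hypothetical postings illustrating the shape of scraped data
postings = [
    {"id": 1, "text": "Acme | Senior Python Engineer | Remote"},
    {"id": 2, "text": "Widgets Inc | Frontend Developer | NYC, onsite only"},
]

matching = search_jobs(postings, ["python", "remote"])  # matches posting 1 only
```

All keywords must match, so a query like `["python", "remote"]` narrows results rather than widening them.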
Once configured, you can ask Claude Desktop to search the listings in natural language, for example "find remote senior Python roles".
```
hackernews-jobscraper/
├── scraper.py                   # Core scraping functionality
├── mcp_server.py                # MCP server implementation
├── requirements.txt             # Python dependencies
├── claude_desktop_config.json   # Example Claude Desktop config
├── cache/                       # Cached job data (created automatically)
└── README.md                    # This file
```
The scraper defaults to thread ID 44434574. You can change this by modifying the scrape_job_postings() call in scraper.py.
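For reference, a "Who's Hiring" thread is an ordinary HN item, so it can be fetched through the official Hacker News Firebase API. The sketch below shows that approach; it is an assumption about how fetching could work, not necessarily what `scraper.py` does internally.

```python
import json
import urllib.request

HN_API = "https://hacker-news.firebaseio.com/v0"

def item_url(item_id: int) -> str:
    """URL of a single HN item (story or comment) in the official Firebase API."""
    return f"{HN_API}/item/{item_id}.json"

def fetch_item(item_id: int) -> dict:
    """Fetch one item; a 'Who's Hiring' story lists its top-level comment IDs in 'kids'."""
    with urllib.request.urlopen(item_url(item_id)) as resp:
        return json.load(resp)

# Each top-level comment of the thread is one job posting:
# thread = fetch_item(44434574)
# posting_ids = thread.get("kids", [])
```

Swapping in a different thread ID only changes the item URL; the response shape is the same for any HN story.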
Cache files are stored in the cache/ directory and expire after 1 hour. This can be modified in the HackerNewsScraper class.
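The caching behavior can be pictured as a small mtime-based helper like the following. This is a sketch under the assumption that cache entries are JSON files keyed by path; see the `HackerNewsScraper` class for the real logic.

```python
import json
import time
from pathlib import Path

CACHE_TTL = 3600  # seconds; the cache expires after 1 hour by default

def save_cache(path: Path, data) -> None:
    """Write data as JSON, creating the cache directory if needed."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data))

def load_cached(path: Path, ttl: int = CACHE_TTL):
    """Return cached JSON if the file exists and is younger than ttl, else None."""
    if not path.exists():
        return None
    if time.time() - path.stat().st_mtime > ttl:
        return None
    return json.loads(path.read_text())
```

Changing the expiry then amounts to passing a different `ttl` (or editing the constant), which mirrors the adjustment described above.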
- Make sure the paths in `claude_desktop_config.json` are absolute paths
- If results look stale, clear the cache with the `refresh_jobs` tool

```bash
# Test scraper functionality
python scraper.py

# Test MCP server (requires Claude Desktop or MCP client)
python mcp_server.py
```
The project is designed to be extensible: you can point the scraper at a different "Who's Hiring" thread, adjust the cache lifetime, or add new MCP tools in mcp_server.py.