Feedback collection for AI agents. Create surveys from JSON schema, collect structured responses from groups of people, and retrieve aggregated results. Five MCP tools for long-horizon agent workflows: post-event feedback, product launches, team health checks.
Website: humansurvey.co · Docs: humansurvey.co/docs · FAQ: humansurvey.co/faq
Feedback collection infrastructure for AI agents.
HumanSurvey lets an agent doing long-horizon work collect structured feedback from a group of people:
```
Agent is doing a job
  → needs structured feedback from a group
  → creates survey from JSON schema
  → shares /s/{id} URL with respondents
  → humans respond over hours or days
  → agent retrieves structured JSON results and acts on them
```
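The loop above can be sketched in Python. This is a minimal illustration, not an official client: the endpoint paths come from the API examples later in this README, and `StubClient` is a hypothetical stand-in you would replace with real HTTP calls.

```python
import time

class StubClient:
    """Hypothetical stand-in for an HTTP client against the HumanSurvey API.
    A real implementation would call https://www.humansurvey.co/api/...;
    this stub returns canned data so the loop runs offline."""

    def create_survey(self, schema):
        # Real call: POST /api/surveys (Authorization: Bearer hs_sk_...)
        return {"survey_url": "/s/abc123"}

    def get_responses(self, survey_id):
        # Real call: GET /api/surveys/{survey_id}/responses
        return {"responses": [{"rating": 4}, {"rating": 5}]}

def collect_feedback(client, schema, min_responses=2, poll_seconds=0):
    """Create a survey, share its URL, then poll until enough humans answered."""
    survey = client.create_survey(schema)
    survey_id = survey["survey_url"].rsplit("/", 1)[-1]
    print(f"Share with respondents: {survey['survey_url']}")
    while True:
        results = client.get_responses(survey_id)
        if len(results["responses"]) >= min_responses:
            return results["responses"]
        time.sleep(poll_seconds)  # in practice the wait is hours or days

responses = collect_feedback(StubClient(), {"title": "Post-Event Feedback"})
```

In a real agent the polling interval would be long, or skipped entirely in favor of the `webhook_url` option on survey creation.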
HumanSurvey is a minimal API and MCP server for one narrow job: let agents collect structured feedback from groups of humans and get machine-usable results back.
It is designed for: agents that need structured, machine-readable feedback from a group of humans as part of a longer-running job.
It is not designed for: human-operated survey tooling, analytics dashboards, or custom UI variants.
Supports `single_choice`, `multi_choice`, `text`, `scale`, and `matrix` question types, plus `showIf` conditional logic, in both Markdown and JSON schema formats.

Markdown example:

```markdown
# Survey Title

**Description:** Instructions for the respondent.

## Section Name

**Q1. Your question here?**
- ☐ Option A
- ☐ Option B
- ☐ Option C

**Q2. Multi-select question?** (select all that apply)
- ☐ Choice 1
- ☐ Choice 2
- ☐ Choice 3

**Q3. Open-ended question:**
> _______________

| # | Item   | Rating            |
|---|--------|-------------------|
| 1 | Item A | ☐ Good ☐ OK ☐ Bad |
| 2 | Item B | ☐ Good ☐ OK ☐ Bad |
```

Scale questions:

```markdown
**Q4. How severe is this issue?**
[scale 1-5 min-label="Low" max-label="Critical"]
```

Conditional logic:

```markdown
**Q1. Did the deploy fail?**
- ☐ Yes
- ☐ No

**Q2. Which step failed?**
> show if: Q1 = "Yes"
> _______________________________________________
```
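The actual parser is built on remark, but the visibility rule behind `show if: Q1 = "Yes"` can be sketched in a few lines. This is an illustrative Python sketch; the function names are mine, not the parser package's API.

```python
import re

def parse_show_if(condition):
    """Parse a condition like 'Q1 = "Yes"' into (question_id, expected_value)."""
    m = re.fullmatch(r'\s*(\w+)\s*=\s*"([^"]*)"\s*', condition)
    if not m:
        raise ValueError(f"unsupported showIf condition: {condition!r}")
    return m.group(1), m.group(2)

def is_visible(condition, answers):
    """A conditional question is shown only when the referenced answer matches."""
    qid, expected = parse_show_if(condition)
    return answers.get(qid) == expected

# Q2 ("Which step failed?") only appears after answering Yes to Q1.
print(is_visible('Q1 = "Yes"', {"Q1": "Yes"}))  # True
print(is_visible('Q1 = "Yes"', {"Q1": "No"}))   # False
```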
Add to your Claude Code config (`~/.claude.json`):

```json
{
  "mcpServers": {
    "survey": {
      "command": "npx",
      "args": ["-y", "humansurvey-mcp"],
      "env": {
        "HUMANSURVEY_API_KEY": "hs_sk_your_key_here"
      }
    }
  }
}
```
Then in Claude Code:
> Create a post-event feedback survey with a 1-5 rating, open text, and a yes/no question
Available tools:
- `create_key` — self-provision an API key; no human setup required
- `create_survey` — create from JSON schema; optional `max_responses`, `expires_at`, `webhook_url`
- `get_results` — aggregated results + raw responses
- `list_surveys` — list surveys owned by your key
- `close_survey` — close a survey immediately

Or use the HTTP API directly. Create an API key:

```bash
curl -X POST https://www.humansurvey.co/api/keys \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my claude agent",
    "email": "[email protected]",
    "wallet_address": "eip155:8453:0xabc..."
  }'
```
All fields are optional. `wallet_address` uses CAIP-10 format and will be used for agent-native payments in the future.
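A CAIP-10 account identifier has the shape `namespace:chain_id:address`. A small Python sketch of validating the `eip155` form used above; this check is illustrative and says nothing about the API's own validation rules.

```python
import re

# CAIP-10 account id: <namespace>:<reference>:<address>.
# For EVM chains the namespace is "eip155", the reference is the numeric
# chain id (8453 is Base), and the address is a 0x-prefixed 40-hex-char string.
CAIP10_EIP155 = re.compile(r"eip155:\d+:0x[0-9a-fA-F]{40}")

def is_caip10_eip155(value: str) -> bool:
    return CAIP10_EIP155.fullmatch(value) is not None

print(is_caip10_eip155("eip155:8453:0x" + "ab" * 20))  # True
print(is_caip10_eip155("0xabc123"))                    # False: no chain prefix
```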
Then create a survey:

```bash
curl -X POST https://www.humansurvey.co/api/surveys \
  -H "Authorization: Bearer hs_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "schema": {
      "title": "Post-Event Feedback",
      "sections": [{
        "questions": [
          { "type": "scale", "label": "How would you rate the event?", "min": 1, "max": 5 },
          { "type": "text", "label": "What should we improve?" }
        ]
      }]
    }
  }'
```
Response:

```json
{
  "survey_url": "/s/abc123",
  "question_count": 2
}
```
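The same payload can be assembled programmatically. A hedged Python sketch: the helper names are mine, and only the schema shape (`title`, `sections`, questions with `type`/`label` and `min`/`max` for scales) comes from the example above.

```python
import json

def scale(label, lo=1, hi=5):
    """A 1-5 rating question, matching the scale question in the curl example."""
    return {"type": "scale", "label": label, "min": lo, "max": hi}

def text(label):
    """A free-text question."""
    return {"type": "text", "label": label}

def survey(title, *questions):
    """Wrap questions into the request body shape shown above."""
    return {"schema": {"title": title, "sections": [{"questions": list(questions)}]}}

payload = survey(
    "Post-Event Feedback",
    scale("How would you rate the event?"),
    text("What should we improve?"),
)
print(json.dumps(payload, indent=2))
```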
Read results:

```bash
curl https://www.humansurvey.co/api/surveys/abc123/responses \
  -H "Authorization: Bearer hs_sk_..."
```
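The endpoint returns aggregated results plus raw responses; the exact JSON shape is documented elsewhere, so this Python sketch assumes a simple list of answer dicts and shows the kind of local aggregation an agent might do with them.

```python
from collections import Counter
from statistics import mean

# Assumed shape of raw responses -- the real payload may differ.
raw = [
    {"rating": 5, "improve": "More Q&A time"},
    {"rating": 4, "improve": "Shorter talks"},
    {"rating": 4, "improve": ""},
]

avg = mean(r["rating"] for r in raw)
histogram = Counter(r["rating"] for r in raw)
comments = [r["improve"] for r in raw if r["improve"]]

print(f"average rating: {avg:.2f}")        # average rating: 4.33
print(f"distribution:  {dict(histogram)}")
print(f"comments:      {comments}")
```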
Docs: https://www.humansurvey.co/docs · OpenAPI: https://www.humansurvey.co/api/openapi.json · llms.txt: https://www.humansurvey.co/llms.txt

| Component | Technology |
|---|---|
| Framework | Next.js (App Router) |
| Database | Neon (serverless Postgres) |
| Parser | remark (unified ecosystem) |
| Frontend | React + Tailwind CSS |
| MCP Server | @modelcontextprotocol/sdk |
| Deployment | Vercel |
```
├── apps/web/             # Next.js app (API + frontend)
├── packages/parser/      # Markdown → Survey JSON parser
├── packages/mcp-server/  # MCP server for Claude Code
└── docs/                 # Architecture docs
```
Read CONTRIBUTING.md before opening a PR. The most important rule is scope discipline: new UI variants, analytics dashboards, and human-operator features are usually out of scope.
```bash
pnpm install                    # Install dependencies
pnpm dev                        # Start Next.js dev server
pnpm --filter @mts/parser test  # Run parser tests
pnpm build                      # Build all packages
```
MIT
For Claude Desktop, add this to `claude_desktop_config.json` and restart Claude Desktop:

```json
{
  "mcpServers": {
    "sunsiyuan-human-survey": {
      "command": "npx",
      "args": ["-y", "humansurvey-mcp"],
      "env": {
        "HUMANSURVEY_API_KEY": "hs_sk_your_key_here"
      }
    }
  }
}
```