GitHub-backed registry and CLI for local AI resources. Search, pull, and submit reusable skills, tools, templates, datasets, and workflow artifacts.
pip install pullnexus
Search the registry:
pullnexus search "fine-tune 35B on consumer GPU"
pullnexus search "local agent loop" --type skill
Browse what's available:
pullnexus list-skills --type dataset
pullnexus pull local-rag-starter-pack
Start a submission:
pullnexus submit --interactive --type skill
# or open an issue: github.com/MRWillisT/PullNexus/issues/new/choose
The registry currently holds 156 resources across 10 resource types: skills, tools, templates, playbooks, policies, prompts, datasets, environments, evals, and repositories.
Local models are improving quickly, but the surrounding workflow is still fragmented. Useful prompts, JSONL examples, tool definitions, templates, hardware notes, and evaluation sets are scattered across repositories, gists, and chat logs.
PullNexus packages those artifacts into a consistent registry format that can be searched from a CLI today and extended by other clients over time. The practical goal is simple: find a relevant resource, inspect its metadata, pull it if it is installable, or use it as reference material for your own local setup.
It is a small, open distribution layer for local AI workflows rather than a model host, hosted agent platform, or prompt gallery.
Several pieces have matured at the same time, and together they make a typed, pull-oriented registry useful even at a modest scale.
Every resource is listed in skills/index.json with typed metadata. Resources do not need to start as "skills." A useful JSONL conversation set, deployment playbook, model template, policy document, tool reference, or environment profile can each go through the same registry format.
In practice, many entries come from real project work: something becomes reusable, gets documented, and is added to the registry with metadata and optional supporting files.
Current public commands:
pullnexus search rust debugger --type skill
pullnexus list-skills --category automation
pullnexus pull local-rag-starter-pack
pullnexus submit --interactive --type playbook
Some resource types are installable file packages; others are reference entries that point to external repositories, datasets, or documentation.
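Purely as an illustration of that split, an installable entry and a reference entry might carry metadata along these lines. The resource names come from this page, but the field names and values are assumptions made for the sketch, not the actual PullNexus index schema:

{
  "name": "local-rag-starter-pack",
  "type": "skill",
  "installable": true,
  "files": ["skill.json", "README.md", "examples.jsonl", "eval.jsonl"]
}

{
  "name": "qwen3-35b-12gb-llama-server",
  "type": "environment",
  "installable": false,
  "reference_url": "https://github.com/..."
}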
Here's the folder structure for a skill — this is what people submit:
skills/python-advanced-debugging/
├── skill.json → Metadata (name, description, tags, version, license)
├── examples.jsonl → JSONL conversation pairs or training examples
├── README.md → Human-readable explanation and usage notes
├── eval.jsonl → Test cases to verify the skill behaves as expected
└── tools/ → Optional MCP tool definitions
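For a concrete sense of what examples.jsonl holds, one line might look like the sketch below. The "messages"/"role" layout is an assumption borrowed from common chat-style JSONL formats, not a confirmed PullNexus schema:

{"messages": [{"role": "user", "content": "Why does my long-running Python service slowly leak memory?"}, {"role": "assistant", "content": "Capture tracemalloc snapshots at intervals and diff them to find the growing allocation site, then confirm the fix under load."}]}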
Example skill.json:
{
  "name": "python-advanced-debugging",
  "version": "1.2.0",
  "description": "Expert techniques for memory leaks, pdb, and tracing in Python",
  "tags": ["python", "debugging", "development"],
  "license": "CC0-1.0",
  "evaluation_cases": 12,
  "mcp_compatible": true
}
The structure is intentionally plain so that review, reuse, and validation stay straightforward.
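For example, because everything is plain JSON and JSONL, ordinary command-line tools are enough to sanity-check a package before submitting it. The commands below use generic tools (jq and wc), not an official PullNexus validation step:

jq . skills/python-advanced-debugging/skill.json        # fails loudly if the metadata is not valid JSON
wc -l skills/python-advanced-debugging/examples.jsonl   # one example per line, so this counts the examples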
Any AI assistant connected to the PullNexus MCP server can use the live registry mid-conversation instead of replying from generic background knowledge alone.
Example prompt for a strong before/after demo:
I’m building a fully local RAG pipeline for PDFs with Ollama. Retrieval quality is bad, chunking feels wrong, and I want something concrete I can inspect or install. What should I use?
Without PullNexus, most assistants give broad advice about RAG frameworks and chunk sizing. With PullNexus MCP connected, the assistant can search the live registry, recommend concrete resources like local-rag-starter-pack and rag-eval-baseline, and offer to pull them into a local folder immediately.
Other good demo prompts are ones that include a real stack plus a failure mode: local RAG debugging, agent orchestration, MCP integration, or Python debugging all work well.
PullNexus sits between raw repositories and full platform products. It is not trying to host models or replace agent runtimes. Its job is narrower: keep reusable local-AI resources in one searchable format with enough metadata to make them easy to discover and reuse.
| Platform | Primary focus | Gap PullNexus addresses |
|---|---|---|
| HuggingFace | Models and datasets | Not organized around smaller local-AI workflow artifacts |
| OpenSkills | Hosted skills ecosystem | Not open, repo-native, or local-first |
| Agent toolkits | Runtime and tool frameworks | Do not solve registry/discovery for reusable resources |
| PullNexus | Registry for local-AI resources | Early-stage project focused on schema, search, and contribution flow |
| Challenge | Mitigation |
|---|---|
| Quality | Stars, reviews, test cases, curation queue |
| Spam | GitHub workflow + signing |
| Incentives | Attribution, contributor history, and reusable outputs |
| Legal | Clear CC0/MIT contribution license + provenance tracking |
PullNexus is currently maintained by one person. Decisions about the registry format, contribution rules, and moderation are made openly in the repository through issues and pull requests. That keeps the project straightforward: fast decisions, public rationale, and a clear paper trail.
If the project grows into a true multi-maintainer effort, governance can expand into a lightweight maintainer model with documented roles and decision rules. For now, the priority is simple: keep the standards clear, keep the process public, and keep the project useful.
A few entry points worth knowing about:
- autonomous-agent-training-pack — 160+ synthetic JSONL examples, 16 themes, ready-to-use train/val/test splits
- synthetic-general-training-pack — 110+ general-purpose training examples for coding, reasoning, docs, and web
- agent-role-orchestrator / agent-role-coder / agent-role-reviewer — system prompts for a full multi-agent local setup
- local-agent-system-blueprint — beginner guide to building a local autonomous agent system
- multi-agent-roles-template — JSON role config for a 5-agent local system out of the box
- vibe-coder-workflow — the full self-taught builder loop, from vague idea to working code
- qwen3-35b-12gb-llama-server — community-contributed llama-server config for Qwen3 on 12GB VRAM
- kv-cache-vram-best-practices — VRAM optimization policy for KV cache tuning
- n8n-mcp-workflows / autonomous-agent-payments — MCP ecosystem entries

Browse the full registry with: pullnexus list-skills
| Area | Next step |
|---|---|
| CLI | Expose additional packaged commands more consistently and align help text with the public surface |
| Registry | Keep expanding coverage across the 10 supported resource types while tightening metadata quality |
| Docs | Add clearer integration guidance for CLI, MCP/server usage, and contribution paths |
| Review | Improve validation, compatibility reporting, and contributor feedback loops |
| Discovery | Add better browsing, filtering, and categorization around the live index |
PullNexus is maintained by a developer who has spent roughly a year and a half working on real AI-assisted projects. The project grew out of repeated reuse problems: useful prompts, JSONL examples, deployment notes, and tool references kept showing up in ad hoc formats with no clear place to standardize them.
That is why the repository is biased toward practical artifacts and plain files. The goal is not to present a grand platform vision first; it is to make reusable local-AI material easier to package, review, find, and use.
Search the registry. Pull what fits. Submit what helped.
Add this to claude_desktop_config.json and restart Claude Desktop.
{
  "mcpServers": {
    "pullnexus": {
      "command": "npx",
      "args": []
    }
  }
}