Enables Python code linting by integrating the Model Context Protocol with tools like pylint and the OpenAI API. It allows for dynamic tool discovery and uses LLMs to orchestrate tool selection and provide refined analysis of linting results.
This project demonstrates the use of the Model Context Protocol (MCP) to enable linting capabilities for Python code. The MCP framework allows seamless integration between tools and LLMs, where tools perform the actual work, and LLMs orchestrate and interpret the results.
Client → MCP Server → Lint Engine → OpenAI API
In this scenario the LLM plays two roles: it selects the appropriate tool and it refines the results. The MCP server performs the actual linting, while the LLM orchestrates the workflow and interprets the output.
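The tool-selection step can be sketched as a plain prompt round-trip. This is an illustrative sketch, not the project's exact prompt: `build_selection_prompt` and `parse_selection` are hypothetical helper names, and the actual chat-completion call (e.g. to the OpenAI API) is left out.

```python
# Sketch of the LLM tool-selection step; helper names are illustrative,
# and the actual chat-completion call is omitted.
def build_selection_prompt(tools, task):
    """Ask the LLM to pick one tool name from the advertised list."""
    return (
        f"Available tools: {', '.join(tools)}.\n"
        f"Task: {task}\n"
        "Reply with exactly one tool name from the list."
    )

def parse_selection(reply, tools):
    """Accept the reply only if it names a known tool."""
    choice = reply.strip().lower()
    return choice if choice in tools else None

tools = ["pylint", "eslint", "tflint"]
prompt = build_selection_prompt(tools, "Lint a Python source file")
# If the LLM replies "pylint", the client proceeds with that tool:
selected = parse_selection("pylint", tools)
```

Validating the reply against the advertised tool list keeps a hallucinated tool name from ever reaching `call_tool`.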
======================================================================
MCP-BASED LINTER
======================================================================
Connecting to MCP server...
✓ Connected to MCP server
✓ Available tools: ['pylint', 'eslint', 'tflint']
======================================================================
STEP 1: Selecting linter tool with LLM
======================================================================
LLM selected tool: pylint
======================================================================
STEP 2: Executing selected linter tool
======================================================================
✓ Linter tool executed successfully
Raw linting results:
----------------------------------------------------------------------
[{'line': 1, 'message': 'Missing docstring', 'severity': 'warning'}]
----------------------------------------------------------------------
======================================================================
STEP 3: Refining linting results with LLM
======================================================================
✓ LLM refined the linting results
Refined analysis:
- Line 1: Missing docstring. Add a docstring to describe the function's purpose.
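The refinement step packs the raw findings into a prompt for the LLM. The sketch below shows one way to do that; the wording and function name are illustrative, not the project's actual prompt.

```python
# Sketch of packing raw linter findings into a refinement prompt for
# the LLM; the wording is illustrative, not the project's exact prompt.
def build_refinement_prompt(findings):
    """Render findings as a bullet list the LLM is asked to explain."""
    lines = [
        f"- Line {f['line']} ({f['severity']}): {f['message']}"
        for f in findings
    ]
    return (
        "Explain each linting finding below and suggest a concrete fix:\n"
        + "\n".join(lines)
    )

raw = [{"line": 1, "message": "Missing docstring", "severity": "warning"}]
prompt = build_refinement_prompt(raw)
```

The LLM's reply to this prompt is what appears above as the "Refined analysis" section.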
pip install -r requirements.txt
Create a .env file containing:
OPENAI_API_KEY=your-api-key
python server/mcp_server.py
python client/mcp_client.py
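Reading the key at runtime might look like the sketch below; `load_api_key` is an illustrative helper, not part of the project's code, and it assumes the .env value has been exported into the process environment.

```python
# Minimal sketch of reading the API key; fails fast if the .env value
# was never exported into the environment. Helper name is illustrative.
import os

def load_api_key():
    """Return OPENAI_API_KEY from the environment, or raise."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key
```

Failing fast here gives a clearer error than letting the OpenAI client reject an empty key later.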
======================================================================
MCP-BASED LINTER
======================================================================
Connecting to MCP server...
✓ Connected to MCP server
✓ Available tools: ['analyze_python_code']
======================================================================
STEP 2: Executing linter tool directly
======================================================================
✓ Linter tool executed successfully
Raw linting results:
----------------------------------------------------------------------
[{'line': 1, 'message': 'Missing docstring', 'severity': 'warning'}]
----------------------------------------------------------------------
.
├── README.md
├── requirements.txt
├── client/
│   └── mcp_client.py       # Client implementation
├── server/
│   ├── mcp_server.py       # MCP server implementation
│   ├── lint_engine.py      # Linting logic
│   └── rules_config.json   # Configurable linting rules
└── sample-code/
    └── bad_code.py         # Sample code for testing
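A minimal version of the docstring check that produces the sample output above could look like the following. This is a sketch, not the project's actual lint_engine.py, and it covers only the module-docstring rule.

```python
# Sketch of a single lint rule matching the sample output format above.
# Uses only the standard library; the real lint_engine.py may differ.
import ast

def analyze_python_code(code):
    """Flag a missing module docstring in the project's result format."""
    findings = []
    tree = ast.parse(code)
    if ast.get_docstring(tree) is None:
        findings.append(
            {"line": 1, "message": "Missing docstring", "severity": "warning"}
        )
    return findings

print(analyze_python_code("x = 1"))
# [{'line': 1, 'message': 'Missing docstring', 'severity': 'warning'}]
```

Each rule returning plain dicts keeps the raw results easy to serialize over MCP and easy for the LLM to summarize.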
In an enterprise-grade implementation, direct access to OpenAI models is typically restricted. Instead, organizations use a secure and standardized wrapper or gateway to interact with AI models. For example, in this project, we are using a personal OpenAI API key for demonstration purposes. However, in a production environment, the following changes would be made:
Model Access:
Use an enterprise gateway such as GenCore to interact with AI models securely.

Authentication and Authorization:
Manage credentials through a centrally controlled secrets store rather than a local .env file.

Infrastructure:
Run the MCP server behind a networked transport rather than stdio.

Monitoring and Logging:
Capture request and response telemetry for auditing and usage tracking.
In an enterprise setup, the lint_engine.py would be updated to use GenCore instead of directly calling OpenAI. Here’s an example:
# Example: Using GenCore Wrapper
from gencore import GenCoreClient

def analyze_code_with_gencore(code):
    client = GenCoreClient()
    response = client.analyze_code(
        model="gpt-4-enterprise",
        code=code,
    )
    return response["analysis"]
By adopting these practices, the MCP Python Linter can be scaled and secured for enterprise use cases.
In an enterprise-grade implementation, it is common to support multiple tools for different use cases, such as eslint for JavaScript, pylint for Python, and tflint for Terraform. Below is an approach to handle this scenario effectively:
Single vs Multiple MCP Servers:
Tool Registration:
@app.register_tool("eslint", description="Lint JavaScript code")
@app.register_tool("pylint", description="Lint Python code")
@app.register_tool("tflint", description="Lint Terraform code")
Dynamic Tool Discovery:
tools_list = await session.list_tools()
for tool in tools_list.tools:
    print(f"Tool: {tool.name}, Description: {tool.description}")
Tool Invocation:
if file_type == "python":
    tool_name = "pylint"
elif file_type == "javascript":
    tool_name = "eslint"
elif file_type == "terraform":
    tool_name = "tflint"

result = await session.call_tool(tool_name, {"code": code_to_analyze})
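The if/elif chain above can equally be written as a lookup table, which scales better as more linters are registered. This is a suggested refactoring, not code from the project:

```python
# Same mapping as the if/elif chain above, expressed as a dispatch table.
TOOL_BY_FILE_TYPE = {
    "python": "pylint",
    "javascript": "eslint",
    "terraform": "tflint",
}

def select_tool(file_type):
    """Return the registered linter for a file type, or None if unknown."""
    return TOOL_BY_FILE_TYPE.get(file_type)
```

Adding a new linter then becomes a one-line change to the table instead of another elif branch.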
For Small to Medium Teams:
For Large Enterprises:
graph TD
    A[Client] -->|Tool Discovery| B[MCP Server]
    B -->|Invoke eslint| C[ESLint]
    B -->|Invoke pylint| D[PyLint]
    B -->|Invoke tflint| E[TFLint]
By adopting this approach, the MCP Python Linter can be extended to support multiple tools in a scalable and maintainable way.
Below is the flow of how the MCP-based linter integrates into the developer workflow:
graph TD
    B[VS Code calls MCP tool]
    B --> C[Gateway authenticates request]
    C --> D[MCP server lists tools]
    D --> E[GenCore sends request & LLM selects tool]
    E --> F[Router invokes Python engine]
    F --> G[Pylint analyzes code]
    G --> H[Raw results returned]
    H --> I[GenCore sends results to enterprise LLM]
    I --> J[LLM produces explanation]
    J --> K[Final result returned to developer]
This workflow ensures that the developer receives detailed and refined feedback on their code, leveraging the power of MCP, enterprise LLMs, and tools like pylint.
Add this to claude_desktop_config.json and restart Claude Desktop.
{
  "mcpServers": {
    "mcp-python-linter": {
      "command": "npx",
      "args": []
    }
  }
}