Native Windows MCP server that gives AI agents full desktop control. 45+ tools using UIAutomation + OCR with automatic dark theme enhancement instead of screenshots. Features batch action sequencing (run_sequence), window occlusion detection, menu navigation, PrintWindow capture, and embedded AI workflow instructions. Single .NET 8 executable — no Python, no Node, no Selenium.
Give AI assistants eyes and hands. A native Windows MCP server that lets AI see the screen, read UI elements, click buttons, type text, and control any application — with built-in OCR, dark theme support, window occlusion detection, and batch action sequencing.
Built for Claude Code by Leia Enterprise Solutions for the Orbination project.
AI coding assistants are blind. They generate code but can never see the result. They can't compare a design mockup to a running app. They can't click through a UI to test it. This server fixes that.
This MCP server bridges the gap between AI and your desktop. Instead of working blind with just text, the AI can:
- `click_element` OCR fallback — UIAutomation first, then OCR for dark themes, web apps, and iframes
- `run_sequence` — batch multiple UI actions (click, type, paste, hotkey, wait, focus, OCR click) in a single MCP call
- `click_menu_item` — navigate parent > child menus with smooth mouse movement to keep submenus open

```
        AI Client (Claude Code / Claude Desktop)
                         │
                         │ MCP / stdio
                         ▼
           ┌─────────────────────────────┐
           │          MCP Server         │
           │     (ServerInstructions)    │
           └──────────────┬──────────────┘
                          │
        ┌───────┬─────────┼─────────┬─────────┐
        ▼       ▼         ▼         ▼         ▼
      Mouse  Keyboard   Screen    Vision  Composite
      Tools   Tools     Tools     Tools     Tools
        │       │         │         │
        └───┬───┴────┬────┴────┬────┘
            ▼        ▼         ▼
          Win32   UIAuto-  OcrService
         Native   mation   (dark theme)
            │        │
            ▼        ▼
     DesktopScanner     NativeInput
      (occlusion,       (SendInput,
       regions)          clipboard)
            │                │
            └───────┬────────┘
                    ▼
               Windows OS
       (Desktop, Windows, Apps)
```
Single native .NET 8 executable. No Python. No Node.js. No browser drivers. Direct Windows API access.
```sh
cd DesktopControlMcp
dotnet build -c Release
```

Or publish as a single file:

```sh
dotnet publish -c Release -r win-x64 --self-contained false -p:PublishSingleFile=true
```
Add the MCP server to your Claude Code configuration:
```sh
claude mcp add desktop-control -- "C:\path\to\DesktopControlMcp.exe"
```
Or add it manually to your MCP config file:
```json
{
  "mcpServers": {
    "desktop-control": {
      "command": "C:\\path\\to\\DesktopControlMcp\\bin\\Release\\net8.0-windows\\DesktopControlMcp.exe",
      "args": []
    }
  }
}
```
| Tool | Description |
|---|---|
| `scan_desktop` | Full desktop scan — screens, windows with visibility %, UI elements, desktop regions, taskbar |
| `list_windows` | List all visible windows with titles, process names, visibility %, occlusion status |
| `get_window_details` | Get all UI elements in a window (filter by kind: button, input, text, etc.) |
| `find_element` | Search for a UI element by text across all windows |
| `read_window_text` | Extract all visible text from a window |
| `refresh_window` | Re-scan a single window's elements (faster than a full scan) |
| Tool | Description |
|---|---|
| `click_element` | Find an element by text and click it — UIAutomation first, OCR fallback for dark themes/web apps |
| `type_in_element` | Find an input field and type text (ValuePattern, clipboard paste, or click+type fallback) |
| `interact` | Smart interaction — auto-detects the element type and performs the right action |
| `fill_form` | Fill multiple form fields in one call with JSON field:value pairs |
| `select_tab` | Select a browser or application tab by text |
| `click_menu_item` | Navigate menus: click parent, smooth-move to child, click — single call |
| Tool | Description |
|---|---|
| `run_sequence` | Execute multiple UI actions in one call: click, type, paste, hotkey, wait, focus, OCR click, screenshot |
| `click_and_type` | Click at a position, then type text |
| `focus_and_hotkey` | Click to focus (e.g. an iframe), then send a keyboard shortcut atomically |
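This README does not spell out the `run_sequence` action schema, so the field names below (`action`, `text`, `keys`, `seconds`) are assumptions for illustration only; a single batched call might look roughly like:

```json
{
  "actions": [
    { "action": "focus",  "text": "Untitled - Notepad" },
    { "action": "click",  "text": "Edit" },
    { "action": "wait",   "seconds": 0.5 },
    { "action": "type",   "text": "hello" },
    { "action": "hotkey", "keys": "Ctrl+S" }
  ]
}
```

One MCP round-trip instead of five, which is the point of the batching tools.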
| Tool | Description |
|---|---|
| `mouse_click` | Click at screen coordinates |
| `mouse_move` | Move the cursor to a position |
| `mouse_move_smooth` | Move the mouse smoothly (keeps menus/submenus open) |
| `mouse_drag` | Drag from one position to another |
| `mouse_scroll` | Scroll the mouse wheel |
| `mouse_get_position` | Get the current cursor position |
| `keyboard_type` | Type text (supports Unicode) |
| `keyboard_press` | Press a single key |
| `keyboard_hotkey` | Press key combinations (Ctrl+C, Alt+Tab, etc.) |
| `keyboard_key_down` / `keyboard_key_up` | Hold and release keys |
| Tool | Description |
|---|---|
| `focus_window` | Bring a window to the foreground |
| `maximize_window` | Maximize a window |
| `minimize_window` | Minimize a window |
| `restore_window` | Restore a minimized/maximized window |
| `open_app` | Open an app by name (focuses an existing window, clicks the taskbar, or searches Start) |
| `navigate_to_url` | Navigate a browser to a URL |
| Tool | Description |
|---|---|
| `screenshot_to_file` | Full screenshot across all monitors |
| `screenshot_region` | Screenshot a specific screen region |
| `screenshot_window` | Capture a window via the PrintWindow API (works even when obscured) |
| `get_screen_info` | Get the monitor layout (positions, sizes, primary) |
| `ocr_screen_region` | Capture a region and run OCR — auto-enhances dark themes |
| `ocr_window` | Run OCR on an entire window — reads all text with click coordinates |
| `ocr_find_text` | Search for specific text on screen using OCR — returns click coordinates |
| Tool | Description |
|---|---|
| `set_clipboard` | Set clipboard text without pasting |
| `paste_text` | Paste large text via the clipboard (XML, code, multi-line) |
| `auto_scroll` | Scroll with pauses between batches |
| `wait_seconds` | Pause between actions |
| `wait_for_element` | Poll for a UI element to appear, with timeout |
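The server itself is C#, but the poll-with-timeout pattern behind `wait_for_element` is easy to sketch. In this Python stand-in, `find_element` is a placeholder for a real element lookup, not the server's API:

```python
import time

def wait_for_element(find_element, text, timeout=10.0, interval=0.5):
    """Poll `find_element` until it returns a match for `text`,
    or until `timeout` seconds elapse. Returns the element or None."""
    deadline = time.monotonic() + timeout
    while True:
        element = find_element(text)
        if element is not None:
            return element
        if time.monotonic() >= deadline:
            return None
        time.sleep(interval)

# Demo: the "Save" button only appears on the third poll.
calls = {"n": 0}
def fake_lookup(text):
    calls["n"] += 1
    return {"text": text, "x": 450, "y": 320} if calls["n"] >= 3 else None

print(wait_for_element(fake_lookup, "Save", timeout=5, interval=0.01))
```

The same loop shape underlies any "wait until the dialog shows up, then act" workflow.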
The server sends tool usage guidelines automatically on every MCP connection via ServerInstructions. This teaches AI clients the optimal workflow without requiring any configuration files:
- Observation priority: `ocr_window` > `get_window_details` > `list_windows` > `scan_desktop` > `screenshot_to_file`
- Action priority: `click_element` > `click_menu_item` > `run_sequence` > `paste_text` > `mouse_click`
The key insight: OCR and UIAutomation return exact text and coordinates — the AI knows exactly what to click. Screenshots require vision processing and guessing. OCR-first workflows are faster, cheaper, and more reliable.
The server uses a grid-based occlusion analysis (24px cells) to determine which windows are truly visible:
```
Chrome (chrome)     [71] @ -2060,-1461 3456x1403  ← 100% visible
VS Code (Code)      [45] @ -1500,-800  1200x900   ← 65% visible
Explorer (explorer) [20] @ -1400,-700  800x600    ← 0% visible [OCCLUDED]
```
The AI knows which windows it can interact with and which are hidden. Combined with desktop region detection (flood-fill to find uncovered screen areas), the AI has a complete spatial understanding of the desktop.
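The grid approach can be illustrated with a short Python sketch (the real scanner is C# with 24px cells; the window records and screen size here are illustrative). Walk windows front-to-back, let each grid cell belong to the front-most window covering it, and report the visible percentage per window:

```python
def visibility(windows, screen_w, screen_h, cell=24):
    """windows: list of (name, x, y, w, h), front-most first (z-order).
    Returns {name: percent_visible}. Each cell is owned by the
    front-most window covering it, so windows behind lose those cells."""
    cols, rows = screen_w // cell, screen_h // cell
    owner = [[None] * cols for _ in range(rows)]
    total = {name: 0 for name, *_ in windows}
    seen = {name: 0 for name, *_ in windows}
    for name, x, y, w, h in windows:                 # front-to-back
        for r in range(max(0, y // cell), min(rows, (y + h) // cell)):
            for c in range(max(0, x // cell), min(cols, (x + w) // cell)):
                total[name] += 1
                if owner[r][c] is None:              # not claimed by a fronter window
                    owner[r][c] = name
                    seen[name] += 1
    return {n: round(100 * seen[n] / total[n]) if total[n] else 0 for n in total}

# Chrome fully covers Explorer and half-covers Code:
wins = [("Chrome", 0, 0, 480, 480),
        ("Code", 240, 0, 480, 480),
        ("Explorer", 0, 0, 240, 240)]
print(visibility(wins, 960, 480))  # Chrome 100%, Code 50%, Explorer 0%
```

The cells left unowned after this pass are exactly the uncovered desktop areas that the region detection flood-fills into rectangles.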
Many modern apps use dark themes where standard OCR fails. The server automatically detects dark backgrounds and enhances images before running OCR.
This works automatically on ocr_window, ocr_screen_region, ocr_find_text, and click_element's OCR fallback.
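The OcrService's exact heuristics aren't documented here, but a common form of this pre-processing can be sketched: estimate the mean luminance of the capture and invert it when the background is dark, so the OCR engine sees dark text on a light background. The threshold value below is an assumption:

```python
def enhance_for_ocr(pixels, dark_threshold=0.5):
    """pixels: 2-D list of grayscale values in 0..255.
    If mean luminance falls below the threshold, invert the image so
    dark-theme (light-on-dark) text becomes dark-on-light for OCR."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat) / 255.0
    if mean < dark_threshold:
        return [[255 - p for p in row] for row in pixels], True   # inverted
    return pixels, False                                          # already light

# Dark UI: near-black background with a few light text pixels.
dark = [[20, 20, 230], [20, 20, 20]]
enhanced, inverted = enhance_for_ocr(dark)
print(inverted, enhanced[0])   # inversion triggered
```

Real pipelines often add contrast stretching or binarization on top, but the detect-then-invert step is what makes dark-theme text legible to a stock OCR engine.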
Full multi-monitor support out of the box with per-monitor DPI awareness:

- `get_screen_info` reports the monitor layout (positions, sizes, primary)
- `screenshot_to_file` captures all screens; `screenshot_region` targets any region
- Taskbar detection covers `Shell_TrayWnd` (primary) and `Shell_SecondaryTrayWnd` (secondary monitors)

Unlike screenshot-based tools that guess what's on screen, this server reads the actual UI element tree exposed by Windows. Every button, input field, text label, tab, and checkbox is detected with its kind, text, and click coordinates.
click_element combines both strategies. UIAutomation first (fast, structured), OCR fallback (universal):
```
click_element "Save"
→ UIAutomation: found "Save" button → click via Invoke pattern ✓

click_element "OK"   (dark web dialog)
→ UIAutomation: not found
→ OCR: capture window → enhance dark theme → find "OK" text → click center ✓
```
Applications that render their own UI canvas (Flutter, Electron with custom rendering, game engines) may expose fewer elements to UIAutomation. The OCR fallback handles these cases automatically.
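The strategy reads as a plain fallback chain. A Python sketch of the control flow, where `uia_find`, `ocr_find`, and `click` are stand-ins for the server's internals rather than its real API:

```python
def click_element(text, uia_find, ocr_find, click):
    """Try UIAutomation first (fast, structured); if the element is
    not in the UIA tree, fall back to OCR on the window image."""
    hit = uia_find(text)
    if hit is not None:
        click(hit)
        return "uia"
    hit = ocr_find(text)      # capture + dark-theme enhance + text search
    if hit is not None:
        click(hit)            # click the center of the OCR match
        return "ocr"
    return "not-found"

# Dark web dialog: UIAutomation sees nothing, OCR locates the text.
clicked = []
result = click_element(
    "OK",
    uia_find=lambda t: None,
    ocr_find=lambda t: (412, 305),
    click=clicked.append,
)
print(result, clicked)   # falls through to the OCR path
```

Because OCR works on pixels, this same chain covers canvas-rendered UIs (Flutter, custom Electron, game engines) without any per-app configuration.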
Every MCP tool call costs tokens. This server is engineered to minimize token usage:
Most desktop automation tools send full screenshots for every action — each one costs thousands of tokens. This server returns compact structured text:
```
[button]   "Save"      @ 450,320
[input]    "Search..." @ 200,60
[tab-item] "Settings"  @ 120,35
```
- `run_sequence` executes multiple actions in one call (click, type, paste, hotkey, wait, focus)
- `fill_form` fills multiple form fields in a single call
- `scan_desktop` returns screens + windows + elements + taskbar in one response
- `click_menu_item` navigates parent > child menus in one call

Scan results are cached for 30 seconds. Individual windows can be refreshed with `refresh_window` instead of a full `scan_desktop`. The scanner uses UIAutomation's CacheRequest to batch-fetch all properties in a single cross-process call.
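The 30-second cache with per-window refresh can be sketched as follows; the cache shape (a dict keyed by window handle) is illustrative, not the server's actual data model:

```python
import time

class ScanCache:
    """Serve a cached full desktop scan for `ttl` seconds, and let a
    single window be re-scanned without redoing the whole scan."""
    def __init__(self, scan_all, scan_window, ttl=30.0, clock=time.monotonic):
        self.scan_all, self.scan_window = scan_all, scan_window
        self.ttl, self.clock = ttl, clock
        self._data, self._stamp = None, 0.0

    def scan_desktop(self):
        if self._data is None or self.clock() - self._stamp > self.ttl:
            self._data = self.scan_all()          # expensive full scan
            self._stamp = self.clock()
        return self._data

    def refresh_window(self, handle):
        data = self.scan_desktop()
        data[handle] = self.scan_window(handle)   # cheap single-window re-scan
        return data[handle]

calls = []
cache = ScanCache(
    scan_all=lambda: calls.append("full") or {"chrome": ["old"]},
    scan_window=lambda h: calls.append(h) or ["fresh"],
)
cache.scan_desktop()
cache.scan_desktop()          # within TTL: served from cache, no second full scan
cache.refresh_window("chrome")
print(calls)                  # one full scan, then one window refresh
```

Amortizing the full scan this way is what keeps repeated observe-act loops cheap in both latency and tokens.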
```
DesktopControlMcp/
├── Program.cs                # MCP server entry + DPI awareness + ServerInstructions
├── NativeInput.cs            # Low-level mouse/keyboard via SendInput
├── Native/
│   └── Win32.cs              # P/Invoke: EnumWindows, PrintWindow, window management
├── Models/
│   └── SceneData.cs          # Data models: windows (with occlusion), elements, regions
├── Services/
│   ├── DesktopScanner.cs     # Desktop scanning + occlusion analysis + region detection
│   ├── OcrService.cs         # Shared OCR engine with dark theme auto-enhancement
│   └── UiAutomationHelper.cs # Element interaction patterns
└── Tools/
    ├── VisionTools.cs        # scan, find, click (with OCR fallback), list windows
    ├── CompositeTools.cs     # run_sequence, click_menu_item, navigate, open app
    ├── MouseTools.cs         # Mouse control
    ├── KeyboardTools.cs      # Keyboard control
    └── ScreenTools.cs        # Screenshots, OCR tools, PrintWindow capture
```
See the examples/ folder for real-world workflows.
Option A: Download the pre-built binary

```sh
claude mcp add desktop-control -- "C:\path\to\DesktopControlMcp.exe"
```

Option B: Build from source

```sh
git clone https://github.com/amichail-1/Orbination-AI-Desktop-Vision-Control.git
cd Orbination-AI-Desktop-Vision-Control/DesktopControlMcp
dotnet build -c Release
claude mcp add desktop-control -- "bin\Release\net8.0-windows\DesktopControlMcp.exe"
```
Contributions welcome. Open an issue or submit a PR.
MIT
Add this to claude_desktop_config.json and restart Claude Desktop.

```json
{
  "mcpServers": {
    "orbination-ai-desktop-vision-control": {
      "command": "C:\\path\\to\\DesktopControlMcp\\bin\\Release\\net8.0-windows\\DesktopControlMcp.exe",
      "args": []
    }
  }
}
```