ProWorkbench Desktop · v1

Local-first AI workbench with approvals and execution control.

ProWorkbench is the controlled AI workspace developers run on their own machine. Bring your own OpenAI-compatible backend, manage your MCP servers, and approve every tool call before it executes. Not SaaS. Not autopilot. You stay in control of every action.

Instant download after purchase  ·  One-time license  ·  30-day guarantee
Local-first  ·  Bring your own backend  ·  Built for developers
ProWorkbench command center showing approvals queue, tool list, and workspace state
Local-first — your data stays on your machine
No hidden execution — every tool call is approved
Built for developers — not for managers, not for marketers
Open source roots  ·  view on GitHub
The problem with everything else

You did not get into building so you could babysit your tools.

Context-switching tax

You jump between editor, terminal, browser tabs, model UIs, and config files just to get one task done. Half your day disappears into the gaps between tools.

Fragile local AI setups

Local model runners drift. Tool configs break between updates. Every "quick experiment" turns into a half-day of dependency archaeology.

Automation you cannot trust

Cloud agents quietly run tools you did not approve, hit endpoints you did not authorize, and burn tokens you did not budget. You cannot ship that into a real workflow.

Too many tools, no governance

MCP servers, scripts, snippets, vendor SDKs — all scattered. Nothing tells you what is installed, what is allowed, or what just executed.

What ProWorkbench actually is

A controlled workspace for AI work, not a hosted assistant.

What it is

  • A local-first AI workbench you install and run on your own machine.
  • An execution layer with explicit approvals before tools or commands run.
  • A governance surface for MCP servers, tools, and prompts you actually own.
  • A workspace for developers and operators who need real control over what AI does.

What it is not

  • Not a SaaS — there is no hosted dashboard you have to log into.
  • Not autopilot AI — nothing runs without your explicit approval.
  • Not a hidden agent — every tool call is visible, logged, and reviewable.
  • Not cloud-controlled — your data, prompts, and history stay on your machine.
How it works in practice

Five surfaces. One workbench. Every action visible.

Approvals system

Every tool call surfaces an approval prompt before it runs. Approve once, approve always, or deny. You decide what executes and what does not.
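The approve-once / approve-always / deny flow can be sketched as a small gate that blocks execution until the user decides. The class and method names here are illustrative, not ProWorkbench's actual API:

```python
from enum import Enum

class Decision(Enum):
    DENY = "deny"
    APPROVE_ONCE = "approve_once"
    APPROVE_ALWAYS = "approve_always"

class ApprovalGate:
    """Blocks tool calls until the user decides; remembers 'always' grants."""

    def __init__(self):
        self._always_allowed: set[str] = set()

    def requires_prompt(self, tool_name: str) -> bool:
        # Tools granted "approve always" skip the prompt on later calls.
        return tool_name not in self._always_allowed

    def record(self, tool_name: str, decision: Decision) -> bool:
        # Returns True only if this specific call may execute.
        if decision is Decision.APPROVE_ALWAYS:
            self._always_allowed.add(tool_name)
            return True
        return decision is Decision.APPROVE_ONCE

gate = ApprovalGate()
assert gate.requires_prompt("shell.run")          # first call: prompt the user
assert gate.record("shell.run", Decision.APPROVE_ALWAYS)
assert not gate.requires_prompt("shell.run")      # later calls skip the prompt
```

The key property is that "deny" and "approve once" never change future behavior; only an explicit "approve always" widens what runs without a prompt.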

Tool governance

See every tool the workbench has loaded, where it came from, and what permissions it has. Disable anything you do not trust without editing config files.

MCP server manager

Add, configure, enable, and disable Model Context Protocol servers from one place. No more hand-editing JSON in three different locations.
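For context, this is the JSON shape most MCP clients use for a server entry, expressed here as a Python dict. Whether ProWorkbench stores exactly this shape internally is an assumption; the `enabled` flag is hypothetical:

```python
import json

# A typical Model Context Protocol server entry. The "mcpServers" key and
# command/args fields follow the common MCP client convention; "enabled"
# is a hypothetical flag for disabling a server without deleting it.
config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
            "enabled": True,
        }
    }
}

print(json.dumps(config, indent=2))
```

Managing these entries from one UI, rather than hand-editing this JSON in several locations, is the whole point of the server manager.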

Doctor checks

Run a single command to verify your environment, backend connection, and tool health. Catch broken setups before they waste an afternoon.
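A doctor pass of this kind boils down to a few independent checks. The sketch below shows two of them; the function names and the `/models` probe are illustrative assumptions, not ProWorkbench's actual implementation:

```python
import sqlite3
import urllib.request
from pathlib import Path

# Illustrative doctor-style checks, not ProWorkbench internals.

def check_backend(base_url: str, timeout: float = 2.0) -> bool:
    """Is the OpenAI-compatible backend reachable at all?"""
    try:
        # GET <base_url>/models is the standard model-listing endpoint.
        urllib.request.urlopen(f"{base_url}/models", timeout=timeout)
        return True
    except OSError:
        return False

def check_database(path: Path) -> bool:
    """Does the local state database open and pass an integrity check?"""
    if not path.exists():
        return False
    with sqlite3.connect(path) as db:
        (status,) = db.execute("PRAGMA integrity_check").fetchone()
    return status == "ok"
```

Running checks like these before a session starts is what turns "why is nothing working" into a one-line diagnosis.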

Local state and logging

Conversations, tool calls, approvals, and history live in a local SQLite database. Inspectable, exportable, and yours.
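Because the state is plain SQLite, any client can inspect it. The table and column names below are hypothetical (ProWorkbench's actual schema may differ), and an in-memory database stands in for the local file:

```python
import sqlite3

# Hypothetical schema standing in for the workbench's local SQLite file.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tool_calls (
    id INTEGER PRIMARY KEY,
    tool TEXT NOT NULL,
    approved INTEGER NOT NULL,
    called_at TEXT DEFAULT CURRENT_TIMESTAMP
)""")
db.execute("INSERT INTO tool_calls (tool, approved) VALUES (?, ?)",
           ("shell.run", 1))

# Every approved call is one SELECT away -- no vendor export needed.
rows = db.execute(
    "SELECT tool, approved FROM tool_calls WHERE approved = 1"
).fetchall()
print(rows)
```

The same query works from the `sqlite3` CLI, a GUI browser, or a cron job, which is what "inspectable, exportable, and yours" means in practice.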

Will this work for me?

If you can run an installer and an OpenAI-compatible backend, yes.

Windows · macOS · Linux: native installers for every supported platform
OpenAI-compatible backend: OpenAI, Ollama, LM Studio, vLLM, llama.cpp, or any compatible endpoint
Local SQLite storage: state, history, and configs live in a single file you own
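"OpenAI-compatible" has a concrete meaning: the backend accepts a chat-completions request at `POST <base_url>/chat/completions`. The sketch below builds such a request with the standard library; the base URLs are the defaults these servers commonly use, and the model name is a placeholder:

```python
import json
import urllib.request

# Default base URLs these backends commonly expose; verify yours.
BACKENDS = {
    "openai": "https://api.openai.com/v1",
    "ollama": "http://localhost:11434/v1",
    "lm_studio": "http://localhost:1234/v1",
}

# Minimal chat-completions payload; "llama3" is a placeholder model name.
payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Say hello."}],
}

def build_request(base_url: str, api_key: str = "none") -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completions request."""
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request(BACKENDS["ollama"])
print(req.full_url)
```

Any server that answers this request shape can sit behind the workbench, which is why swapping vendors is a URL change, not a migration.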
Why this exists

Built for developers who need control, not promises.

Built around explicit approvals because real workflows do not survive on hidden execution. — Design principle · ProWorkbench
Local-first because your prompts, your tool history, and your code context should never leave your machine without you saying so. — Design principle · ProWorkbench
Bring your own backend because the model market moves too fast to be locked into one vendor's API. — Design principle · ProWorkbench
Direct answers

Questions developers actually ask before they install.

Is this really local?

Yes. ProWorkbench installs on your machine, stores data in a local SQLite database, and only talks to whatever AI backend you point it at. There is no hosted dashboard and no server-side account.

Do you provide AI models?

No. ProWorkbench is the workbench, not the model. You bring your own OpenAI-compatible backend — that includes OpenAI itself, Anthropic via a proxy, Ollama, LM Studio, vLLM, llama.cpp servers, or any other endpoint that speaks the OpenAI chat completions API.

Where is my data stored?

On your machine. Conversations, approvals, MCP configs, and tool history all live in a local SQLite file under your user directory. Nothing is synced to a ProWorkbench server because there is no ProWorkbench server.

What happens after I purchase?

You get an instant download link for your platform and a license key by email. Run the installer, paste the license, point it at your AI backend, and you are working. No account creation, no waiting list.

Refund policy?

30-day money-back guarantee. If ProWorkbench does not fit how you actually work, email support and we will refund you. No interrogation.

Which version do I choose?

The same ProWorkbench, packaged for Windows, macOS, or Linux. Pick the build that matches your OS on the platform page. License keys work on any platform you own.

Get started

Pick your platform and run the installer.

Native build for your OS, instant download after purchase, license key by email. Point it at your AI backend and you are working in minutes.

Choose your platform Instant download  ·  One-time purchase  ·  30-day guarantee