Technical perspective

CLI vs MCP for AI agents

CLIs are the easiest way to give an agent full access to an API.

1. On-demand discovery, zero context overhead

No schemas are loaded upfront. Agents discover commands by running --help and pull only the documentation they need. Help text ships inside the binary, so agents fetch it at runtime instead of carrying a full schema in context.
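The discovery loop can be sketched with `cli` stubbed out as a shell function (the binary, subcommands, and help text are hypothetical stand-ins; the point is that each --help call fetches only what the agent needs, when it needs it):

```shell
#!/bin/sh
# Hypothetical CLI stub that answers --help at each level,
# the way a real static binary would.
cli() {
  case "$1 $2" in
    "--help ")         echo "usage: cli <contacts|tasks> <command> [flags]" ;;
    "contacts --help") echo "usage: cli contacts <list|get> [--tag TAG] [--output FORMAT]" ;;
    *)                 echo "unknown command" >&2; return 1 ;;
  esac
}

cli --help             # step 1: top-level commands only
cli contacts --help    # step 2: drill into one subcommand when needed
```

Nothing is loaded until the agent asks for it, so an unused subcommand costs zero tokens.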

2. One command to install, runs everywhere

A single static binary with zero dependencies. One curl | bash to install. Works in any sandbox, CI runner, container, or remote environment — no runtimes, no MCP configuration.
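The whole install flow is two steps: fetch one script, end up with one binary on PATH. A minimal offline sketch (the installer and binary here are fakes written locally, so the flow runs without a network; all names and paths are hypothetical):

```shell
#!/bin/sh
# A real installer downloads one static binary; this stand-in writes a
# stub instead, so the pattern can be shown end to end offline.
mkdir -p /tmp/demo/bin

# stand-in for the hosted install script
cat > /tmp/demo/install.sh <<'EOF'
printf '#!/bin/sh\necho cli 1.0.0\n' > /tmp/demo/bin/cli
chmod +x /tmp/demo/bin/cli
EOF

# the "curl ... | bash" step, with curl replaced by a local cat
cat /tmp/demo/install.sh | sh

# nothing else to set up: the binary runs on its own
PATH="/tmp/demo/bin:$PATH" cli --version
```

No interpreter, package manager, or server process is involved after the script finishes, which is why the same one-liner works in a CI runner or a bare container.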

3. Composable — take multiple actions in one step

cli contacts list --tag churning --output json \
  | jq -r '.[].owner_id' \
  | sort -u \
  | xargs -I{} cli tasks create --assignee {} --title "Follow up with churning contacts" --due 3d

Pipes, redirection, and scripting let agents take multiple actions in one step. Fewer turns, fewer tokens, faster results.
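The pipeline above can be exercised end to end with `cli` stubbed out as a shell function (subcommands, flags, and sample data are hypothetical; `jq` is swapped for `cut` and `xargs` for a `while` loop so the sketch needs only POSIX tools and can call the function):

```shell
#!/bin/sh
# Stub: `contacts list` emits one JSON object per line; `tasks create`
# just echoes what it would do.
cli() {
  if [ "$1" = "contacts" ]; then
    printf '{"owner_id":"u1"}\n{"owner_id":"u2"}\n{"owner_id":"u1"}\n'
  else
    echo "task created for $4"   # $4 is the --assignee value
  fi
}

cli contacts list --tag churning --output json \
  | cut -d'"' -f4 \
  | sort -u \
  | while read -r owner; do
      cli tasks create --assignee "$owner" --title "Follow up with churning contacts"
    done
```

One shell invocation lists contacts, deduplicates owners, and files a task per owner: three tool calls collapsed into a single turn.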

4. Bash is native to LLMs

LLMs have been trained on decades of shell usage across open source repos, man pages, and technical writing. Reading --help output, passing flags, and parsing JSON responses is well-worn territory — no special integration needed, and it works across every model and provider, even cheap ones.

5. Nothing new to maintain

No new protocol to learn, no server to spin up, and no specification changes to keep up with. A CLI is a static binary — ship it and move on.

InstantCLI generates agent-ready CLIs from any API docs or OpenAPI spec — with rich help text, auto-updates, cross-platform binaries, and on-demand discovery built in.

Get started