llm
Agent-ready · Local AI
Prompting, local plugins, and structured LLM outputs from the terminal. Built by Simon Willison. Supports structured output, which makes it a good fit for scripts and agents.
Task fit
Prompting, local plugins, and structured LLM outputs from the terminal.
Lane
Set up coding agents, local models, and AI-first terminal workflows.
Operator brief
Use llm for prompting, local plugins, and structured LLM outputs from the terminal.
Run `llm 'Explain this command: rg TODO src'` and see what comes back.
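Because llm reads standard input, the prompt above can also be combined with piped command output. A minimal sketch, assuming an API key and default model are already configured:

```shell
# The first real command from the brief above
llm 'Explain this command: rg TODO src'

# Pipe real output into a prompt; the diff becomes part of the context
git diff | llm 'Write a one-line commit message for this diff'
```

Anything llm prints to stdout can be captured in a variable or redirected to a file like any other shell command.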
Repository family
Simon Willison
First trust check
The llm CLI responds and is ready for prompts.
Safe first loop
Install, verify, then run one real command.
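The loop above can be sketched as a single shell pass; the `pipx` install and the prompt text come straight from this page:

```shell
# 1. Install the CLI into an isolated environment
pipx install llm

# 2. Verify the binary responds before prompting
llm --help

# 3. One real command: ask the model to explain a shell invocation
llm 'Explain this command: rg TODO src'
```

If the verify step fails, fix the install before handing the tool to an agent.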
Agent capability loop
Install command
$ pipx install llm
Operator pack
Copy or export the working notes for this CLI before handing it to an agent.
Verify
$ llm --help
The llm CLI responds and is ready for prompts.
First real command
$ llm 'Explain this command: rg TODO src'
First steps
1. Install llm.
2. Run `llm --help` first.
3. Start with `llm 'Explain this command: rg TODO src'`.
4. Authenticate llm before asking the agent to do real work.
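The authentication step is typically an API key. This sketch assumes the default OpenAI-backed model; a local model plugin would skip the key entirely:

```shell
# Store a key once; llm keeps it in its own keys store
llm keys set openai

# Confirm which models are now available
llm models
```

`llm keys set openai` prompts interactively for the key, so it never lands in shell history.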
When to use / hold off when
Best for
Prompting, local plugins, and structured LLM outputs from the terminal.
Use this when
You want AI models and inference you can script with structured output.
Hold off when
Trust and constraints
Why operators pick it
- llm fits local AI workflows well, especially prompting, local plugins, and structured LLM outputs from the terminal.
- 1,418 Homebrew installs in the last 30 days.
- Good for scripts and agents.
Constraints
- Sign in (set an API key or install a local model plugin) before real work.
- Needs network access.
Repository context
Other CLIs in this family
sqlite-utils
SQLite automation, CSV/JSON imports, and data transforms from the terminal.