- Check that `~/.opsh/config.json` is valid JSON.
## Opsh doesn't start automatically when I open a terminal
Opsh relies on a shell integration block added to your shell rc file during installation. If it’s missing or your PATH isn’t set correctly, Opsh won’t be available automatically.
- Verify that `~/.opsh/bin` is present: run `ls ~/.opsh/bin`.
- Check that `~/.opsh/bin` is in your `PATH`: run `echo $PATH` and look for it.
- Re-run the install script to restore the shell integration.
- Check that the environment variable `OPSH_DISABLE_AUTO` is not set in your shell environment. If it is, Opsh's auto-start is suppressed; unset it or remove it from your rc file.
- Open a new terminal after making any changes.
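The checks above can be scripted. A minimal POSIX-shell sketch (the paths and the `OPSH_DISABLE_AUTO` variable are the ones described above):

```shell
# Report each auto-start prerequisite in one pass.
[ -d "$HOME/.opsh/bin" ] && echo "bin dir: present" || echo "bin dir: missing"
case ":$PATH:" in
  *":$HOME/.opsh/bin:"*) echo "PATH: contains ~/.opsh/bin" ;;
  *)                     echo "PATH: missing ~/.opsh/bin" ;;
esac
[ -z "${OPSH_DISABLE_AUTO:-}" ] && echo "auto-start: not suppressed" \
  || echo "auto-start: suppressed by OPSH_DISABLE_AUTO"
```

Each line prints a status rather than failing, so you can run it as-is and read off which prerequisite is broken.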
## "API key is not configured" error
Opsh needs a valid API key for your chosen AI provider before it can generate any commands.
- Run `opsh --init` to go through the setup wizard and enter your provider and API key interactively.
- Alternatively, open `~/.opsh/config.json` in a text editor and set the `apiKey` field under your provider's configuration.
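As a sketch of what that edit might look like (only the `apiKey` field is documented above; the surrounding `provider`/`providers` nesting is an assumption, and the key value is a placeholder, so mirror whatever structure your config already has):

```json
{
  "provider": "openai",
  "providers": {
    "openai": {
      "apiKey": "sk-your-key-here"
    }
  }
}
```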
## Provider request failed (HTTP error)
An HTTP error from your provider usually means a configuration problem or a connectivity issue.
- Confirm your API key is correct and has not expired. Generate a new key from your provider’s dashboard if needed.
- Check your internet connection.
- Open `~/.opsh/config.json` and verify that the `baseUrl` field matches your provider's API endpoint exactly.
- If you're using Ollama for local inference, make sure Ollama is running: `ollama serve`.
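For the Ollama case, you can probe its default local endpoint directly (port 11434 and the `/api/tags` route are Ollama defaults, not Opsh settings):

```shell
# -sf suppresses progress output and treats HTTP errors as failure;
# -m 2 caps the wait at two seconds.
if curl -sf -m 2 http://localhost:11434/api/tags >/dev/null; then
  echo "ollama: reachable"
else
  echo "ollama: not reachable (try 'ollama serve')"
fi
```

If this prints "not reachable", fix Ollama first; no Opsh configuration change will help until the endpoint answers.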
## The generated command doesn't do what I asked
The AI model may have interpreted your request differently than you intended. You have a few options at the confirmation prompt:
- Press `r` to regenerate the command with a safer or simpler interpretation, keeping your original request.
- Press `e` to open an edit prompt and modify the command manually before running it.
- Cancel with `n` and rephrase your request more specifically; for example, include the exact file name, directory, or flags you need.
- If Opsh isn't using enough context from your recent history, increase `recentContextLimit` in `~/.opsh/config.json` to give the model more shell history to work with.
## Warp mode auto-runs a command I wanted to review
Warp mode automatically runs commands classified as safe. If you want to review everything before it executes, turn warp mode off.
- In the REPL, type `!warp` to toggle warp mode off for the current session.
- To disable it permanently, open `~/.opsh/config.json` and set `"warpMode": false`.
## I want to run a raw shell command without AI
You can bypass AI generation and send input directly to your shell in two ways:
- In the REPL, type `!cmd` to toggle raw command mode. While active, everything you type is passed directly to the shell without going through the AI.
- For one-off raw commands, use your normal shell directly instead of Opsh's one-shot mode; `opsh "!cmd ..."` is not supported in one-shot.
## Opsh is slow or has high latency
Response time depends largely on the AI provider and model you've configured.
- Switch to a faster model. Good options for low latency include Gemini 2.5 Flash (`gemini-2.5-flash`), Claude Haiku 3.5 (`claude-3-5-haiku-20241022`), or GPT-5 mini (`gpt-5-mini`).
- If you use OpenRouter, consider its auto-router, which routes requests to a fast available model automatically.
- For fully local inference with no provider network round-trip, configure Opsh to use Ollama with a model running on your machine.
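If your config selects the model by name, switching could look like the fragment below (the top-level `model` field name is an assumption; reuse whatever key your existing `~/.opsh/config.json` uses):

```json
{
  "model": "gemini-2.5-flash"
}
```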

