If something isn’t working as expected, the issues below cover the most common causes and how to resolve them. If your problem isn’t listed here, check that you’re on the latest version of Opsh and that your config file at ~/.opsh/config.json is valid JSON.
Opsh relies on a shell integration block added to your shell rc file during installation. If it’s missing or your PATH isn’t set correctly, Opsh won’t be available automatically.
  1. Verify that ~/.opsh/bin is present: run ls ~/.opsh/bin.
  2. Check that ~/.opsh/bin is in your PATH: run echo $PATH and look for it.
  3. Re-run the install script to restore the shell integration:
    curl -fsSL https://opsh.dxu.one/install.sh | bash
    
  4. Check that the environment variable OPSH_DISABLE_AUTO is not set in your shell environment. If it is, Opsh’s auto-start is suppressed — unset it or remove it from your rc file.
  5. Open a new terminal after making any changes.
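Steps 1, 2, and 4 can be run together as a quick diagnostic. This is a sketch in plain POSIX shell; it only reports status and changes nothing:

```shell
# Check that the Opsh bin directory exists
if [ -d "$HOME/.opsh/bin" ]; then
  echo "ok: ~/.opsh/bin exists"
else
  echo "missing: ~/.opsh/bin (re-run the install script)"
fi

# PATH entries are colon-separated, so wrap PATH in colons before matching
case ":$PATH:" in
  *":$HOME/.opsh/bin:"*) echo "ok: ~/.opsh/bin is on PATH" ;;
  *)                     echo "missing: ~/.opsh/bin is not on PATH" ;;
esac

# Warn if auto-start is suppressed
if [ -n "${OPSH_DISABLE_AUTO:-}" ]; then
  echo "warning: OPSH_DISABLE_AUTO is set (auto-start suppressed)"
fi
```

Remember that these checks reflect the current shell only; after fixing your rc file, the results are meaningful only in a freshly opened terminal.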
Opsh needs a valid API key for your chosen AI provider before it can generate any commands.
  • Run opsh --init to go through the setup wizard and enter your provider and API key interactively.
  • Alternatively, open ~/.opsh/config.json in a text editor and set the apiKey field under your provider’s configuration.
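If you edit the file by hand, the result might look roughly like this. The nesting and the provider name are illustrative assumptions (this guide only names the apiKey and baseUrl fields); running opsh --init writes the correct shape for you:

```json
{
  "provider": "openai",
  "providers": {
    "openai": {
      "apiKey": "sk-your-key-here",
      "baseUrl": "https://api.openai.com/v1"
    }
  }
}
```

After editing, make sure the file is still valid JSON, since a stray comma or quote will prevent Opsh from loading the key at all.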
An HTTP error from your provider usually means a configuration problem or a connectivity issue.
  1. Confirm your API key is correct and has not expired. Generate a new key from your provider’s dashboard if needed.
  2. Check your internet connection.
  3. Open ~/.opsh/config.json and verify that the baseUrl field matches your provider’s API endpoint exactly.
  4. If you’re using Ollama for local inference, make sure Ollama is running: ollama serve.
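Steps 2 and 3 can be checked together by pulling baseUrl out of the config and probing it. A sketch using only POSIX tools (the sed pattern assumes the "baseUrl" field sits on a single line; with jq installed you could extract it more robustly):

```shell
CONFIG="$HOME/.opsh/config.json"
if [ -f "$CONFIG" ]; then
  # Extract the first "baseUrl" value from the config
  BASE_URL=$(sed -n 's/.*"baseUrl"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$CONFIG" | head -n 1)
  if [ -n "$BASE_URL" ]; then
    echo "baseUrl: $BASE_URL"
    # A connection error here points at step 2; a 401/403 points at step 1
    curl -sS -o /dev/null -w "HTTP %{http_code}\n" "$BASE_URL" || echo "could not reach $BASE_URL"
  else
    echo "no baseUrl found in $CONFIG"
  fi
fi
```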
The AI model may have interpreted your request differently than you intended. You have a few options at the confirmation prompt:
  • Press r to regenerate the command with a safer or simpler interpretation, keeping your original request.
  • Press e to open an edit prompt and modify the command manually before running it.
  • Cancel with n and rephrase your request more specifically — for example, include the exact file name, directory, or flags you need.
  • If Opsh isn’t using enough context from your recent history, increase recentContextLimit in ~/.opsh/config.json to give the model more shell history to work with.
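If you raise the context limit, the entry might look like this. The value 50 is purely illustrative, and placing the field at the top level is an assumption based on this guide naming it directly in ~/.opsh/config.json:

```json
{
  "recentContextLimit": 50
}
```

A larger limit sends more of your shell history with each request, which can improve relevance at the cost of slightly larger prompts.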
Warp mode automatically runs commands classified as safe. If you want to review every command before it runs, turn warp mode off.
  • In the REPL, type !warp to toggle warp mode off for the current session.
  • To disable it permanently, open ~/.opsh/config.json and set "warpMode": false.
You can bypass AI generation and send input directly to your shell in two ways:
  • In the REPL, type !cmd to toggle raw command mode. While active, everything you type is passed directly to the shell without going through the AI.
  • For one-off raw commands, use your normal shell directly instead of Opsh’s one-shot mode — opsh "!cmd ..." is not supported in one-shot.
Response time depends entirely on the AI provider and model you’ve configured.
  • Switch to a faster model. Good options for low latency include Gemini 2.5 Flash (gemini-2.5-flash), Claude Haiku 3.5 (claude-3-5-haiku-20241022), or GPT-5 mini (gpt-5-mini).
  • If you use OpenRouter, consider its auto-router, which routes requests to a fast available model automatically.
  • To eliminate network latency entirely, configure Opsh to use Ollama for fully local inference with a model running on your machine.
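A local Ollama setup might be configured along these lines. Port 11434 is Ollama's default; the provider and model field names here are assumptions, so prefer opsh --init, which writes the correct shape for you:

```json
{
  "provider": "ollama",
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434",
      "model": "llama3.2"
    }
  }
}
```

With this setup, keep ollama serve running in the background and pull the model first (ollama pull) so the first request doesn't stall on a download.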