# Getting Started
## Installation
Download the latest binary for your platform from
GitHub Releases and place
it somewhere on your `$PATH`:
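A typical sequence looks like the following. The release URL and asset name here are placeholders — substitute the actual owner, repo, and filename from the Releases page for your platform:

```shell
# Placeholder URL: replace OWNER and the asset name with the real ones
curl -LO https://github.com/OWNER/smelt/releases/latest/download/smelt-linux-amd64
chmod +x smelt-linux-amd64
mv smelt-linux-amd64 ~/.local/bin/smelt
```

Any directory on your `$PATH` works; `~/.local/bin` is just a common choice that doesn't require root.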
## First-Time Setup
Just run `smelt`. It will create `~/.config/smelt/config.yaml` and you're ready
to go.
You can also skip the wizard and connect directly with CLI flags.
### Local Models (Ollama)
Any server that speaks the OpenAI chat completions API works: Ollama, vLLM, SGLang, llama.cpp.
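With Ollama running locally, connecting might look like this. The `--api-base` and `--model` flag names are assumptions here (they mirror the `api_base` and `model` keys in the config file); `http://localhost:11434/v1` is Ollama's standard OpenAI-compatible endpoint:

```shell
# Pull the model first, then point smelt at Ollama's OpenAI-compatible API
ollama pull qwen3.5:27b
smelt --api-base http://localhost:11434/v1 --model qwen3.5:27b
```

The same pattern should apply to vLLM, SGLang, or llama.cpp — only the base URL and model name change.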
### Cloud Providers
No API key needed — authenticate with your ChatGPT Pro/Plus subscription:
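The exact invocation depends on your version of smelt; as a sketch, assuming the provider can be selected by name on the command line:

```shell
smelt --provider codex
```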
The Codex provider uses OAuth to connect to your ChatGPT subscription. Tokens are stored locally and refreshed automatically.
## Writing a Config File
Once you have a setup you like, save it to `~/.config/smelt/config.yaml` so you
don't need CLI flags every time:
```yaml
providers:
  - name: ollama
    type: openai-compatible
    api_base: http://localhost:11434/v1
    models:
      - qwen3.5:27b
  - name: openai
    type: openai
    api_base: https://api.openai.com/v1
    api_key_env: OPENAI_API_KEY
    models:
      - gpt-5.4
  - name: anthropic
    type: openai-compatible
    api_base: https://api.anthropic.com/v1
    api_key_env: ANTHROPIC_API_KEY
    models:
      - claude-opus-4-6

defaults:
  model: ollama/qwen3.5:27b  # provider_name/model_name
```
Now just run `smelt` — it connects to your default model automatically. Switch
models at runtime with `/model`. See the
Configuration Reference for all options.
## Next Steps
- Usage Guide — modes, tools, sessions, and the full daily workflow
- Customization — themes, settings, custom commands
- CLI Reference — all command-line flags