Hermes Agent by Nous Research is one of the most practical open-source AI agents available today. Unlike most AI tools that forget everything when you close the terminal, Hermes features a built-in learning loop — it creates reusable skills from successful workflows and persists memory across sessions.
In this tutorial, you will go from zero to a running Hermes Agent connected to your preferred AI model, with messaging gateway integration and your first self-created skill.
Prerequisites
Before you begin, check that your system meets these requirements:
Operating System:
- Linux (Ubuntu 20.04+, Debian, Fedora, Arch)
- macOS (Intel or Apple Silicon)
- Windows via WSL2 (native Windows is not supported)
Hardware:
- 4 GB RAM minimum (8 GB+ recommended)
- 2 GB free disk space
- For local models: GPU with 8-16 GB+ VRAM
AI Model Access (pick one):
- API key from OpenRouter, OpenAI, or Anthropic
- Ollama installed locally for free, offline usage
Important: Hermes requires a model with at least 64,000 tokens of context. Models with smaller context windows cannot maintain enough working memory for multi-step workflows.
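To get a sense of what 64,000 tokens means in practice: at the commonly cited rough average of 4 characters per English token, that window holds about a quarter-megabyte of raw text, shared between chat history, tool output, and skill definitions. A quick back-of-envelope calculation:

```shell
# Rough rule of thumb: ~4 characters per token for English text.
tokens=64000
chars_per_token=4
total_chars=$((tokens * chars_per_token))
echo "${total_chars} characters (~$((total_chars / 1024)) KiB of raw text)"
```

Smaller windows (8K or 16K) fill up after a handful of tool calls, which is why Hermes enforces this floor.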
Step 1: Install Hermes Agent
Open your terminal and run the one-line installer:
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
This command handles everything automatically:
- Detects your OS and installs missing dependencies
- Clones the repository to ~/.hermes
- Creates a Python virtual environment
- Registers the global hermes command
- Launches the setup wizard for LLM provider configuration
Once finished, reload your shell:
source ~/.bashrc # or source ~/.zshrc on macOS
Verify the installation:
hermes --version
If you encounter any issues, run the built-in diagnostics:
hermes doctor
Step 2: Choose & Configure Your AI Model
During installation, the setup wizard prompts you to pick an LLM provider. You can also change it later:
hermes model
Option A: OpenRouter (200+ Models)
OpenRouter gives you access to over 200 models through a single API key.
- Sign up at openrouter.ai and get your API key
- Select "OpenRouter" in the setup wizard
- Paste your API key and choose a model (Claude Sonnet or GPT-4o are solid choices)
Option B: Ollama (Free, Local, Offline)
This is the zero-cost option with no data leaving your machine.
- Install Ollama: curl -fsSL https://ollama.com/install.sh | sh
- Pull a model: ollama pull gemma4
- In the Hermes wizard, select "Custom endpoint" and set the URL to http://127.0.0.1:11434/v1
When running locally, set the context size to at least 64K:
ollama run gemma4 --ctx-size 65536
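If you'd rather not set the context size on every run, Ollama can bake the parameter into a derived model via a Modelfile. A minimal sketch (the gemma4 name comes from the pull step above; gemma4-64k is just a name chosen here):

```
# Modelfile: derive a variant with a 64K context window baked in
FROM gemma4
PARAMETER num_ctx 65536
```

Build it with ollama create gemma4-64k -f Modelfile, then point the Hermes wizard at gemma4-64k instead of the base model.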
Provider Fallback Chains
Hermes supports ordered fallback chains. Configure this in ~/.hermes/config.yaml:
fallback_providers:
  - provider: openrouter
    model: anthropic/claude-sonnet
  - provider: ollama
    model: gemma4
Step 3: Set Up Telegram or Discord Bot
The messaging gateway lets you interact with your agent through Telegram, Discord, Slack, WhatsApp, and more.
Telegram Setup
- Open Telegram and search for @BotFather
- Send /newbot and follow the prompts
- Copy the bot token
- Run hermes gateway setup, select Telegram, and paste your token
Restrict access to your own account by setting your numeric Telegram user ID (you can look it up by messaging a bot such as @userinfobot):
export TELEGRAM_ALLOWED_USERS=YOUR_USER_ID
Discord Setup
- Go to the Discord Developer Portal
- Create a new application and add a bot
- Copy the bot token
- Run hermes gateway setup, select Discord, and paste your token
Start the Gateway
Run in the foreground to test:
hermes gateway
Then install as a persistent service:
hermes gateway install
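hermes gateway install should handle service registration for you. If it doesn't fit your setup, a user-level systemd unit is one manual alternative. A minimal sketch, with assumptions flagged: the unit name is chosen here, and the ExecStart path is a guess (confirm the real location with command -v hermes):

```
# ~/.config/systemd/user/hermes-gateway.service (hypothetical unit)
[Unit]
Description=Hermes Agent messaging gateway

[Service]
ExecStart=%h/.local/bin/hermes gateway
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now hermes-gateway.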
What's Next?
Now that your agent is running, try these practical projects:
- Personal research assistant — Monitor RSS feeds and deliver daily briefings
- Code review bot — Connect the GitHub MCP server to review pull requests
- DevOps monitor — Set up cron jobs to check server health and SSL certificates
- Data pipeline automation — Fetch, clean, and transform data from APIs on a schedule
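For the scheduled projects above, the scheduling side is ordinary cron. A sketch, assuming a hypothetical hermes run subcommand that hands the agent a one-shot task (verify the actual invocation against hermes --help):

```
# crontab -e entry (hypothetical subcommand): daily 08:00 health check
0 8 * * * hermes run "check server disk usage and SSL expiry, message me a summary"
```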
Start with the basics, let the agent learn your workflows, and scale up from there.
