

Overview

Hermes Agent is an open-source AI agent from Nous Research. It runs as a local or server-hosted agent runtime with tools, memory, skills, terminal access, and messaging integrations. In EnConvo, Hermes Agent is used through its OpenAI-compatible API server. After the Hermes gateway is running, EnConvo can use Hermes like any other AI model in chat, EnConvo agents, and model-powered features.
New to Hermes? Start with the Hermes Agent Quickstart, then return here to connect it to EnConvo.

What You Need

  • Hermes Agent already installed and configured
  • Hermes gateway running with the API server enabled
  • The API server token from API_SERVER_KEY in ~/.hermes/.env
  • EnConvo with the Hermes Agent AI model provider enabled
Hermes exposes OpenAI-compatible endpoints such as /v1/models and /v1/chat/completions. EnConvo’s Hermes provider uses the Chat Completions API.

Get Base URL and API Key

Use this base URL when EnConvo runs on the same Mac as Hermes:
http://127.0.0.1:8642/v1
Read the API server token from ~/.hermes/.env:
awk -F= '/^API_SERVER_KEY=/{print $2}' ~/.hermes/.env
EnConvo sends this value as:
Authorization: Bearer <API_SERVER_KEY>
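If you are unsure what the awk command above returns, you can try the same invocation against a sample file first (the path and key value below are illustrative only, not your real credentials):

```shell
# Create a sample .env to illustrate the extraction (illustrative key only).
printf 'HERMES_PORT=8642\nAPI_SERVER_KEY=sk-example-123\n' > /tmp/sample.env

# Same awk invocation as above: split each line on '=', and for the line
# starting with API_SERVER_KEY= print the value after the separator.
awk -F= '/^API_SERVER_KEY=/{print $2}' /tmp/sample.env
# → sk-example-123
```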
To test the OpenAI-compatible endpoint directly:
curl http://127.0.0.1:8642/v1/chat/completions \
  -H "Authorization: Bearer <API_SERVER_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hermes-agent",
    "messages": [
      { "role": "user", "content": "Hello!" }
    ]
  }'
Use the base URL ending in /v1, not the full /v1/chat/completions endpoint.
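A common mistake is pasting the full chat-completions URL into the base URL field. A small shell check like the following (a hypothetical helper, not part of EnConvo or Hermes) shows the normalization:

```shell
# Strip a trailing /chat/completions if present, so the result ends in /v1.
normalize_base_url() {
  local url="${1%/}"               # drop any trailing slash
  url="${url%/chat/completions}"   # drop the endpoint path if it was pasted
  printf '%s\n' "$url"
}

normalize_base_url "http://127.0.0.1:8642/v1/chat/completions"
# → http://127.0.0.1:8642/v1
```

A URL that already ends in /v1 passes through unchanged.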

Configure Hermes in EnConvo

Open EnConvo Settings -> AI Model -> Hermes Agent, then configure the credential provider.
(Screenshot: Hermes Agent credential settings in EnConvo)

Setting          | Value
Hermes Base URL  | http://127.0.0.1:8642/v1
API Server Key   | API_SERVER_KEY from ~/.hermes/.env
Click Validate after entering the base URL and token.

Use Hermes as an AI Model

Once configured, Hermes appears in the model selector. Choose Hermes Agent and select the advertised model, usually hermes-agent.
(Screenshot: Hermes Agent selected as the chat model)
Use this mode when you want an EnConvo conversation, command, or feature to run through Hermes Agent while keeping the normal EnConvo interface.
1

Open the model picker

In chat or another model-powered EnConvo feature, click the current model name.
2

Choose Hermes Agent

Select Hermes Agent in the provider list.
3

Select the Hermes model

Choose hermes-agent or the model name advertised by your Hermes API server.
4

Run the feature normally

Ask your question or run the EnConvo feature. Hermes handles the model call behind the scenes.
EnConvo loads Hermes models from the Hermes /v1/models endpoint. If Hermes is not running or the endpoint is unavailable, EnConvo falls back to hermes-agent.
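The discovery-plus-fallback behavior described above can be sketched as a shell function. This mirrors the documented behavior only; it is not EnConvo's actual implementation:

```shell
# Ask the Hermes API server for its model list; if the server is down or
# unreachable, fall back to the default "hermes-agent" model id.
list_models() {
  local base_url="$1" token="$2"
  curl -sf --max-time 3 -H "Authorization: Bearer $token" \
    "$base_url/models" 2>/dev/null \
    || echo '{"data":[{"id":"hermes-agent"}]}'
}
```

With the gateway stopped, the function returns the fallback list instead of failing, which is the behavior EnConvo documents.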

Create an EnConvo Agent for Hermes

You can also create an EnConvo agent that coordinates Hermes Agent. This is useful when you want an EnConvo-facing workflow, tool set, or prompt around Hermes’ agent runtime.
1

Open Create Agent

Open SmartBar and search for Create Agent.
2

Name the agent

Create a clear title, such as Hermes Agent.
3

Write coordinator instructions

Give the EnConvo agent instructions that explain how it should work with Hermes.
You coordinate Hermes Agent for the user.

Use Hermes for agentic work, automation, terminal-backed tasks, and requests that should run through the user's Hermes setup.
Summarize what Hermes did, surface any blockers, and ask before destructive or irreversible actions.
4

Choose the Hermes model

In the agent’s model settings, select Hermes Agent and choose hermes-agent.
5

Save and use the agent

Start a conversation with the new EnConvo agent. It will use Hermes while retaining the EnConvo agent interface and tools you configured.

LAN Access

If Hermes is running on another machine, use the host machine’s LAN IP in EnConvo:
http://<hermes-host-lan-ip>:8642/v1
LAN access should use a strong API_SERVER_KEY. Only expose the Hermes API server on networks you trust.
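One way to generate a strong value for API_SERVER_KEY, assuming openssl is installed (any cryptographically random value of 32+ bytes works equally well):

```shell
# 32 random bytes, hex-encoded: yields a 64-character key.
openssl rand -hex 32
```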

Troubleshooting

  • Hermes models do not appear: make sure the Hermes provider is enabled in Settings -> AI Model. If the model list is empty, confirm the gateway is running with hermes gateway status.
  • Authentication fails: recheck API_SERVER_KEY in ~/.hermes/.env. EnConvo expects this value and sends it as a Bearer token.
  • Connection errors: confirm the API server is healthy with curl http://127.0.0.1:8642/v1/health. If EnConvo runs on another device, use the host’s LAN IP and make sure Hermes is bound for network access.
  • Wrong base URL: enter http://<gateway-host>:8642/v1. Do not include /chat/completions in the EnConvo base URL field.
  • Deeper diagnostics: read Hermes gateway logs with tail -f ~/.hermes/logs/gateway.log and errors with tail -f ~/.hermes/logs/gateway.error.log.