
Configuration

Ways to provide configuration

There are multiple ways to provide configuration to Erato.

All configuration sources are merged together into one, so configuration can be specified via e.g. erato.toml as well as via environment variables.

As of now, there is no specified precedence order for values of the same configuration key provided in different sources.

erato.toml files

The erato.toml file is the preferred way to provide configuration for Erato. The file must be placed in the current working directory of the Erato process.

In the Helm chart, the secret from which the erato.toml file should be mounted can be specified via backend.configFile.

*.auto.erato.toml files

In addition to the main erato.toml file, Erato will also auto-discover all files matching the pattern *.auto.erato.toml in the current working directory.

This is useful if you e.g. want to split out all the secret values (LLM API keys, Database credentials) into a different file that is not checked into source control.
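For example, non-secret settings could be kept in erato.toml while credentials live in a separate auto-discovered file. A minimal sketch (file names and values are illustrative):

# erato.toml (checked into source control)
[chat_provider]
provider_kind = "openai"
model_name = "gpt-4o"

# secrets.auto.erato.toml (not checked into source control)
[chat_provider]
api_key = "sk-..."

Both files are merged into a single configuration.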

Environment variables

Configuration can also be provided via environment variables.

Though it is not recommended, values for nested configuration can also be provided via environment variables. In that case, each nesting level is separated by double underscores (__). E.g. CHAT_PROVIDER__BASE_URL is equivalent to chat_provider.base_url.
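For example, assuming the chat_provider.base_url option documented below, the following two forms set the same value:

# Via environment variable:
#   CHAT_PROVIDER__BASE_URL="http://localhost:11434/v1/"

# Via erato.toml:
[chat_provider]
base_url = "http://localhost:11434/v1/"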

Configuration reference

🚧 Work in progress; covers ~80% of available configuration options 🚧

frontend

frontend.theme

The name of the theme to use for the frontend.

When provided, the theme must be part of the frontend bundle (usually located in the public directory), and placed in the custom-theme directory, under the name provided here. E.g. if frontend.theme is set to my-theme, the theme must be placed in public/custom-theme/my-theme.

See Theming for more information about themes and theme directory structure.

If not provided, the default bundled theme will be used.

Default value: None

Type: string | None

Example

[frontend]
theme = "my-theme"

frontend.additional_environment

Additional values to inject into the frontend environment as global variables. These will be made available to the frontend JavaScript by being added to the window object.

This is a dictionary where each value can be a string or a map (string key, string value).

This may be useful if you are using a forked version of the frontend that you need to pass some configuration to.

Default value: None

Type: object<string, any>

Example

[frontend]
additional_environment = { "FOO" = "bar" }

This will be injected into the frontend as:

window.FOO = "bar";
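Map values are also accepted. A hedged sketch, assuming the hypothetical variable name MY_CONFIG (exactly how maps are represented on the window object may differ):

[frontend]
additional_environment = { "MY_CONFIG" = { "endpoint" = "https://api.example.com", "mode" = "beta" } }

This would then be available as window.MY_CONFIG in the frontend.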

chat_provider

Configuration for the chat provider (LLM) that Erato will use for generating responses.

chat_provider.provider_kind

The type of chat provider to use.

Type: string

Supported values: "openai", "azure_openai"

Example: "openai"

chat_provider.model_name

The name of the model to use with the chat provider.

Type: string

Example: "gpt-4o"

chat_provider.base_url

The base URL for the chat provider API. If not provided, will use the default for the provider.

Type: string | None

Example: "https://api.openai.com/v1/", "http://localhost:11434/v1/"

chat_provider.api_key

The API key for the chat provider.

Type: string | None

Example: "sk-..."

chat_provider.system_prompt

A static system prompt to use with the chat provider. This sets the behavior and personality of the AI assistant.

Type: string | None

Note: This option is mutually exclusive with system_prompt_langfuse.

Example:

[chat_provider]
provider_kind = "openai"
model_name = "gpt-4"
system_prompt = "You are a helpful assistant that provides concise and accurate answers."

chat_provider.system_prompt_langfuse

Configuration for using a system prompt from Langfuse prompt management instead of a static prompt.

Type: object | None

Note: This option is mutually exclusive with system_prompt. Requires the Langfuse integration to be enabled.

Properties:

  • prompt_name (string): The name of the prompt in Langfuse to use.

Example:

[chat_provider]
provider_kind = "openai"
model_name = "gpt-4"
system_prompt_langfuse = { prompt_name = "assistant-prompt-v1" }

[integrations.langfuse]
enabled = true
base_url = "https://cloud.langfuse.com"
public_key = "pk-lf-..."
secret_key = "sk-lf-..."

integrations

Configuration for external service integrations.

integrations.langfuse

Configuration for the Langfuse observability and prompt management integration.

integrations.langfuse.enabled

Whether the Langfuse integration is enabled.

Default value: false

Type: boolean

integrations.langfuse.base_url

The base URL for your Langfuse instance. Use https://cloud.langfuse.com for Langfuse Cloud or your self-hosted URL.

Required when enabled: Yes

Type: string

Example: "https://cloud.langfuse.com" or "https://langfuse.yourcompany.com"

integrations.langfuse.public_key

Your Langfuse project’s public key. You can find this in your Langfuse project settings.

Required when enabled: Yes

Type: string

Example: "pk-lf-1234567890abcdef"

integrations.langfuse.secret_key

Your Langfuse project’s secret key. You can find this in your Langfuse project settings.

Required when enabled: Yes

Type: string

Example: "sk-lf-abcdef1234567890"

integrations.langfuse.tracing_enabled

Whether to enable detailed tracing of LLM interactions. When enabled, all chat completions will be logged to Langfuse.

Default value: false

Type: boolean
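Putting the Langfuse options above together, a combined sketch (key values are placeholders):

[integrations.langfuse]
enabled = true
base_url = "https://cloud.langfuse.com"
public_key = "pk-lf-1234567890abcdef"
secret_key = "sk-lf-abcdef1234567890"
tracing_enabled = true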

See the Langfuse Integration documentation for detailed usage instructions and feature descriptions.

integrations.sentry

Configuration for the Sentry error reporting and performance monitoring integration.

integrations.sentry.sentry_dsn

The Sentry DSN (Data Source Name) for your Sentry project. This enables error reporting and performance monitoring.

Default value: None

Type: string | None

Example:

[integrations.sentry]
sentry_dsn = "https://public@sentry.example.com/1"

See the Sentry Integration documentation for detailed setup instructions.

mcp_servers

Configuration for Model Context Protocol (MCP) servers that extend Erato’s capabilities through custom tools and integrations.

Type: object<string, McpServerConfig>

Example:

[mcp_servers.file_provider]
transport_type = "sse"
url = "http://127.0.0.1:63490/sse"

[mcp_servers.filesystem]
transport_type = "sse"
url = "https://my-mcp-server.example.com/sse"
http_headers = { "Authorization" = "Bearer token123", "X-API-Key" = "key456" }

mcp_servers.<server_id>.transport_type

The type of transport protocol used to communicate with the MCP server.

Type: string

Supported values: "sse" (Server-Sent Events)

Example: "sse"

mcp_servers.<server_id>.url

The URL endpoint for the MCP server. For SSE transport, this conventionally ends with /sse.

Type: string

Example: "http://127.0.0.1:63490/sse", "https://my-mcp-server.example.com/sse"

mcp_servers.<server_id>.http_headers

Optional static HTTP headers to include with every request to the MCP server. This is useful for authentication or API keys.

Type: object<string, string> | None

Example:

[mcp_servers.authenticated_server]
transport_type = "sse"
url = "https://secure-mcp-server.example.com/sse"
http_headers = { "Authorization" = "Bearer your-token", "X-API-Key" = "your-api-key" }

See the MCP Servers documentation for more information about Model Context Protocol integration.
