Models and Authentication
The stack will not be useful until it can authenticate users and talk to at least one model provider.
MODEL_CONFIGS
- Decide which providers you want to use (OpenAI-compatible, Anthropic, Gemini, etc.).
- For each provider, collect its API key and, if the provider requires a custom endpoint, its base URL.
- Build a JSON array describing each model, e.g.:
```json
[
  {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "apiKey": "sk-your-key",
    "baseUrl": "https://api.openai.com/v1",
    "maxRetries": 3
  }
]
```

- Paste the serialized JSON blob into `MODEL_CONFIGS` in `docker/.stack.env`.
The example file currently ships with a slightly different schema (snake_case field names); match whichever field names your stack version actually expects:

```
MODEL_CONFIGS='[{"model_id":"gpt-5-mini","provider":"OpenAI","api_key":"replace-me","display_name":"GPT-5 Mini","is_default":true}]'
```
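Because the value must be a single-line JSON string inside single quotes, it is easy to introduce quoting or comma mistakes. A minimal sketch for building and validating the blob before pasting it (field names follow the shipped example above; adjust them to your schema):

```python
import json

# Describe each model as a plain dict; field names follow the shipped
# example (model_id, provider, api_key, ...) -- adjust to your schema.
models = [
    {
        "model_id": "gpt-5-mini",
        "provider": "OpenAI",
        "api_key": "replace-me",
        "display_name": "GPT-5 Mini",
        "is_default": True,
    }
]

# Serialize compactly so the value stays on one line in the env file.
blob = json.dumps(models, separators=(",", ":"))

# Round-trip to catch quoting/typo errors before pasting into .stack.env.
assert json.loads(blob) == models

print(f"MODEL_CONFIGS='{blob}'")
```

Running this prints a ready-to-paste line; if the JSON is malformed, the round-trip fails immediately instead of at container startup.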
RESEARCHER_AGENT_CONFIG
- This config defines specialized model roles for research-oriented workflows.
- The example file includes a minimal placeholder object so the env surface is present even before you fully wire research.
JWT and session secrets
Required values:
- JWT_SECRET_KEY
- SESSION_SECRET_KEY
- AUTH_SECRET_KEY
- ACCESS_TOKEN_EXPIRE_MINUTES
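The three secret keys should be long, independently generated random strings. A sketch using Python's standard library to generate values you can paste into the env file (the variable names mirror the env vars; the 32-byte size is a common baseline, not a stack requirement):

```python
import secrets

# Generate an independent random value for each secret; 32 bytes of
# entropy is a common baseline for HMAC-signed tokens and sessions.
jwt_secret = secrets.token_urlsafe(32)
session_secret = secrets.token_urlsafe(32)
auth_secret = secrets.token_urlsafe(32)

print(f"JWT_SECRET_KEY={jwt_secret}")
print(f"SESSION_SECRET_KEY={session_secret}")
print(f"AUTH_SECRET_KEY={auth_secret}")
print("ACCESS_TOKEN_EXPIRE_MINUTES=60")  # expiry is a policy choice, not a secret
```

Avoid reusing one value across all three keys: if one is leaked, the others stay intact.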
Google OAuth variables
- GOOGLE_CLIENT_ID
- GOOGLE_CLIENT_SECRET
- GOOGLE_REDIRECT_URI
Populate these only when you are actively testing Google sign-in.
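When you do wire it up, the most common failure is a redirect URI that does not byte-for-byte match the one registered in the Google Cloud console. A sketch of how these two values combine into the authorization URL, using Google's standard OAuth 2.0 endpoint (the client ID and redirect URI below are placeholders):

```python
from urllib.parse import urlencode

# Placeholder values -- substitute the real GOOGLE_CLIENT_ID and
# GOOGLE_REDIRECT_URI from docker/.stack.env.
client_id = "1234567890-example.apps.googleusercontent.com"
redirect_uri = "http://localhost:8000/auth/google/callback"

# Standard Google OAuth 2.0 authorization endpoint.
params = {
    "client_id": client_id,
    "redirect_uri": redirect_uri,  # must exactly match the console entry
    "response_type": "code",
    "scope": "openid email profile",
}
auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(auth_url)
```

If sign-in fails with a `redirect_uri_mismatch` error, compare this constructed URL's `redirect_uri` parameter against the console entry character by character, including scheme, port, and trailing slash.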