
Configuration

Alumnium needs access to an AI model to work. The following models are supported:

Provider          Model
----------------  ---------------------
Anthropic         Claude 4.5 Haiku
GitHub            GPT-4o Mini
Google            Gemini 2.0 Flash
OpenAI (default)  GPT-4o Mini
DeepSeek          DeepSeek V3
Meta              Llama 4 Maverick 17B
MistralAI         Mistral Medium 3
Ollama            Mistral Small 3.1 24B
xAI               Grok 4 Fast

These models were chosen because they provide the best balance of intelligence, performance, and cost. They all behave roughly the same in Alumnium tests.
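Since OpenAI is marked as the default in the table above, a minimal setup should need only the OpenAI key (an assumption based on the table; the sections below show how to switch providers explicitly):

# Assuming ALUMNIUM_MODEL defaults to "openai" when unset:
export OPENAI_API_KEY="sk-proj-..."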

To use Anthropic as an AI provider in Alumnium:

  1. Get the API key.
  2. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="anthropic"
export ANTHROPIC_API_KEY="sk-ant-..."
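
If you prefer not to export credentials into your shell session, the variables can be scoped to a single command instead. A sketch, assuming pytest as the test runner:

# Variables apply to this one invocation only
ALUMNIUM_MODEL="anthropic" ANTHROPIC_API_KEY="sk-ant-..." pytest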

To use GitHub Models as an AI provider in Alumnium:

  1. Get the personal access token.
  2. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="github"
export OPENAI_API_KEY="github_pat_..."
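
GitHub Models serves OpenAI's GPT-4o Mini (see the table above), which is presumably why the token is exported as OPENAI_API_KEY. To confirm the personal access token itself is valid, any authenticated GitHub API call works, for example:

# A 200 response with your user profile means the token is valid
curl -s -H "Authorization: Bearer $OPENAI_API_KEY" https://api.github.com/user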

To use Google AI Studio as an AI provider in Alumnium:

  1. Get the API key.
  2. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="google"
export GOOGLE_API_KEY="..."
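
To verify the key before running tests, you can list the models it grants access to via the Gemini API (an optional check, not required by Alumnium):

# Returns a JSON listing of available Gemini models
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY"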

To use OpenAI as an AI provider in Alumnium:

  1. Get the API key.
  2. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="openai"
export OPENAI_API_KEY="sk-proj-..."
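
An optional sanity check for the key, using OpenAI's models endpoint:

# A JSON list of models confirms the key is accepted
curl -s https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"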

To use DeepSeek as an AI provider in Alumnium:

  1. Set up a DeepSeek Platform account.
  2. Get the API key.
  3. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="deepseek"
export DEEPSEEK_API_KEY="sk-..."
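
DeepSeek's API is OpenAI-compatible, so the key can be checked the same way (assuming the platform's models endpoint):

curl -s https://api.deepseek.com/models -H "Authorization: Bearer $DEEPSEEK_API_KEY"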

To use Meta Llama as an AI provider in Alumnium:

  1. Set up an Amazon Bedrock account.
  2. Enable access to Llama 4 Maverick models.
  3. Get the access key and secret.
  4. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="aws_meta"
export AWS_ACCESS_KEY="..."
export AWS_SECRET_KEY="..."
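
Note that these are standard AWS credentials. If you have the AWS CLI installed and configured with the same access key and secret, a quick sanity check (unrelated to Alumnium itself) is:

# Confirms the credentials authenticate; Bedrock model access is granted separately in the AWS console
aws sts get-caller-identity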

To use MistralAI as an AI provider in Alumnium:

  1. Get the API key.
  2. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="mistralai"
export MISTRAL_API_KEY="..."
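
As with the other hosted providers, the key can be verified against MistralAI's models endpoint before running tests:

curl -s https://api.mistral.ai/v1/models -H "Authorization: Bearer $MISTRAL_API_KEY"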

To use Ollama for fully local model inference:

  1. Download and install Ollama.
  2. Download the Mistral Small 3.1 24B model:
ollama pull mistral-small3.1:24b
  3. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="ollama"
export ALUMNIUM_OLLAMA_URL="..." # if you host Ollama on a server
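
Before running tests, confirm the server is up and the model was pulled (assuming Ollama's default local address):

# The pulled model should appear in the listing
ollama list
# Or query the server's API directly on its default port
curl -s http://localhost:11434/api/tags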

To use xAI as an AI provider in Alumnium:

  1. Get the API key.
  2. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="xai"
export XAI_API_KEY="xai-..."
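
xAI's API is OpenAI-compatible, so the key can be verified with a models listing (assuming the api.x.ai endpoint):

curl -s https://api.x.ai/v1/models -H "Authorization: Bearer $XAI_API_KEY"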

Read next to learn how to write tests!