
Configuration

Alumnium needs access to an AI model to work. The following models are supported:

Provider          Model
Anthropic         Claude 3 Haiku
Google            Gemini 2.0 Flash
OpenAI (default)  GPT-4o Mini
DeepSeek          DeepSeek V3
Meta              Llama 3.2 90B
Ollama            Mistral Small 3.1 24B

These models were chosen because they provide the best balance between intelligence, performance, and cost. They all behave roughly the same in Alumnium tests.
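Whichever provider you choose, the model is selected through environment variables before Alumnium is initialized. As a point of reference, here is a minimal sketch of where that configuration takes effect, assuming the Alumni entry point and the Selenium-based usage shown in the project README (adapt the URL and steps to your own application):

from selenium.webdriver import Chrome

from alumnium import Alumni

# Assumes ALUMNIUM_MODEL and the provider's API key are already exported
# in the shell, as described in the provider sections below.
driver = Chrome()
driver.get("https://duckduckgo.com")

al = Alumni(driver)  # the model and credentials are resolved here, from the environment
al.do("search for Selenium")
al.check("search results contain selenium.dev")
driver.quit()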

To use Anthropic as an AI provider in Alumnium:

  1. Get the API key.
  2. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="anthropic"
export ANTHROPIC_API_KEY="sk-ant-..."
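If you would rather not keep the key in your shell profile, the same variables can be set from your test setup code before the Alumni client is created. This is a sketch, assuming Alumnium reads them from the environment at initialization time; the same pattern applies to every provider below:

import os

# Equivalent to the shell exports above; must run before Alumni(...) is called.
os.environ["ALUMNIUM_MODEL"] = "anthropic"
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."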

To use Google AI Studio as an AI provider in Alumnium:

  1. Get the API key.
  2. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="google"
export GOOGLE_API_KEY="..."

To use OpenAI as an AI provider in Alumnium:

  1. Get the API key.
  2. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="openai"
export OPENAI_API_KEY="sk-proj-..."

To use DeepSeek as an AI provider in Alumnium:

  1. Set up a DeepSeek Platform account.
  2. Get the API key.
  3. Export the following environment variable before running tests:
export ALUMNIUM_MODEL="deepseek"

To use Meta Llama as an AI provider in Alumnium:

  1. Set up an Amazon Bedrock account.
  2. Enable access to Llama 3.2 models.
  3. Get the access key and secret.
  4. Export the following environment variables before running tests:
export ALUMNIUM_MODEL="aws_meta"
export AWS_ACCESS_KEY="..."
export AWS_SECRET_KEY="..."

To use Ollama for fully local model inference:

  1. Download and install Ollama.
  2. Download the Mistral Small 3.1 24B model:
ollama pull mistral-small3.1:24b
  3. Export the following environment variable before running tests:
export ALUMNIUM_MODEL="ollama"

Read next to learn how to write tests!