Alumnium v0.20 is one of our largest releases yet, bringing an architectural rewrite along with many new features - notably a new Codex model provider and an upgrade of the Ollama model to Qwen 3.6. It’s such an important release that we’ve skipped v0.19 completely!
This release ships as both PyPI and npm packages, along with a Docker image for Alumnium Server.
New Architecture
Alumnium was originally written as a Python package using LangChain for all AI-related logic. Eventually, TypeScript support was added, which depended on Alumnium Server - a Docker container that ships the Python package internally. This was a requirement because we didn’t want to duplicate the LLM core logic across both clients. Later, we also added an MCP server to the Python package, with uvx as the recommended installation method.
However, the idea of centralizing Alumnium’s core logic into a Docker container brought several problems. First, it’s extra infrastructure that needs to be maintained. Second, Docker container and client versions must be kept in sync (we’re still a 0.x project, so backwards compatibility sometimes breaks). Third, many organizations don’t favor Docker for everything and would rather avoid paying for extra Docker seats for every developer.
A different solution was needed, and what we ended up with was a complete architectural rewrite of Alumnium. We started by porting all core logic from Python to TypeScript, which required using LangChain.js. Migrating from the Python version was mostly seamless, but LangChain.js has its own quirks, particularly around missing types, so we wrote our own type-safety layer that generates types based on real LLM responses. Finally, we moved the MCP server to TypeScript as well.
Once the main pieces were rewritten, we used Bun to turn our TypeScript code into binaries. Bun has a single-file executable feature that bundles all source code along with the Bun runtime into one compact binary. It also allows for cross-compilation, so we could cover all major operating systems (Linux, macOS, Windows) and architectures (x64, arm64). It feels almost magical to see how TypeScript code can become a portable binary that needs no dependencies!
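In practice, producing such binaries boils down to one command per platform. Here’s a minimal sketch using Bun’s documented --compile and --target flags (the entry point path and output names are illustrative, not Alumnium’s actual build script):

```sh
# Bundle the TypeScript entry point and the Bun runtime into one binary per target
bun build --compile --target=bun-linux-x64    ./src/server.ts --outfile alumnium-linux-x64
bun build --compile --target=bun-darwin-arm64 ./src/server.ts --outfile alumnium-darwin-arm64
bun build --compile --target=bun-windows-x64  ./src/server.ts --outfile alumnium-windows-x64.exe
```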
The compiled binary contains both the Alumnium HTTP and MCP servers. Instead of depending on a Docker container, clients can now run the binary to start the server (alumnium server). To make it completely seamless for users, we’ve created standalone packages for PyPI and npm containing platform-specific binaries, which the main packages depend on. This ensures that when you install Alumnium via pip or npm, it automatically pulls in the proper version of the binary. You no longer have to run the Docker container or worry about client and server versions going out of sync.
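For a typical setup, installation and startup look roughly like this (the package name alumnium is assumed to match the project name on PyPI and npm):

```sh
# Installing the client also pulls in the platform-specific server binary
pip install alumnium   # or: npm install alumnium
# Start the bundled HTTP/MCP server locally
alumnium server
```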
Centralizing with a binary brought many other benefits. The Docker container for the server became much smaller (77MB vs. 650MB) since it no longer needs the Python runtime. The MCP server can be installed directly by downloading the proper binary version (curl -LsSf https://alumnium.ai/install.sh | sh), though installation via uvx/npx is still possible. The binary approach also opened the door to a CLI (in addition to the MCP server) that can be used via an agent skill, which we are currently developing.
Huge shoutout to Sasha Koss, who handled a massive amount of work on the TypeScript rewrite!
Codex and Qwen Models
Codex is now an officially supported provider in Alumnium. Since OpenAI still allows using Codex subscriptions for third-party integrations, we’ve written a langchain-codex integration. It automatically handles authentication, so if you have a ChatGPT Plus/Pro subscription, you can reuse it for Alumnium calls without an API key. The default model used is gpt-5.4-mini, but you can change it as needed.
To register the Alumnium MCP server in Codex:

```sh
codex mcp add alumnium --env ALUMNIUM_MODEL=codex -- alumnium mcp
```

If you prefer local models, we’ve switched the Ollama provider from Mistral Small 3.1 to Qwen 3.6. The latter has native vision, tool calling, and - most importantly - a thinking mode, bringing it on par with the Anthropic, Google, and OpenAI providers.
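Provider selection happens through the same environment variable; a minimal sketch (the codex value comes from the command above, while the ollama value is an assumption - check the docs for the exact syntax):

```sh
# Select the Codex provider (as seen in the command above)
export ALUMNIUM_MODEL="codex"
# Or, assuming the Ollama provider uses the same variable, run local Qwen:
export ALUMNIUM_MODEL="ollama"
```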
Check out the documentation for details on how to start using these new models.
OpenTelemetry
If you run Alumnium at scale, you need good observability in place. Alumnium now ships with built-in OpenTelemetry support. It automatically creates traces for each session, with spans capturing individual driver and LLM calls. This allows you to dive deeper into why certain things are slow or not working as expected. Logs are also automatically exported to OpenTelemetry backends, so you can use tools like Grafana to get all the observability information you need.
To enable OpenTelemetry, export the following environment variables:
```sh
export ALUMNIUM_TRACE="true"
export OTEL_SERVICE_NAME="alumnium"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318" # change as needed
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf" # change as needed
```

MCP Changes
Many improvements were made in the MCP server:
- Removed “driver” terminology. You should now use `start`/`stop` instead of `start_driver`/`stop_driver`, and `id` instead of `driver_id`.
- All MCP tool responses are now in JSON, allowing for easier parsing in coding assistant hooks.
- Playwright browsers are automatically installed at the start of a session if they are missing, bringing it on par with Selenium.
- Capabilities in the `start` tool call can now be passed as a file path (see the sketch after this list).
- `cookies` and `headers` capabilities were moved to `alumnium:options`.
- `driverSettings` were removed. Instead, any supported setting can be passed directly to `alumnium:options`.
- A `baseUrl` field was added to `alumnium:options` to navigate to a specific page on session start (supported across all drivers).
- A `headless` field was added to `alumnium:options` to toggle between headless and headed modes for both Playwright and Selenium.
- A `profile` field was added to `alumnium:options` to create and reuse persisted Chrome profiles across multiple runs (supported for Playwright and Selenium).
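To make the file-based capabilities concrete, here is a hedged sketch of such a file; the field names come from the list above, but the exact nesting and schema are assumptions, so consult the documentation for the authoritative format:

```sh
# Hypothetical capabilities file to pass by path to the `start` tool call.
# Field names are from the release notes; the nesting is an assumption.
cat > capabilities.json <<'EOF'
{
  "alumnium:options": {
    "baseUrl": "https://example.com",
    "headless": true,
    "profile": "smoke-tests",
    "cookies": [{ "name": "session", "value": "abc123" }],
    "headers": { "X-Test-Run": "1" }
  }
}
EOF
```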
Check out the documentation for all MCP server details.
Coming Next
Now that the foundation is solid, we’re ready to ship even more features. We are focusing on the following areas:
- New clients. There is already a PR with basic Java support written by Ish Abbi, which we hope to release in the coming weeks.
- CLI. We have a basic version of a CLI skill working, letting you use Alumnium with coding assistants without the MCP server.
- Plugins. We’d like to add Claude Code and Codex plugins so Alumnium can be easily used for end-to-end testing during development with these agents.
- Full-featured test runner. We are exploring the possibility of creating a test runner that can take free-form text and execute it with close-to-native performance and auto-healing. This would allow for building cross-platform tests without writing a single line of code.
Happy testing!