
Alumnium v0.9 with Ollama support

Published by Alex Rodionov
release notes

It’s been a while since our last release due to vacations, but we’re now catching up, and v0.9.0 was published yesterday.

The biggest highlight is support for completely local model inference using Mistral Small 3.1 24B and Ollama. This means that if you have decent hardware (e.g. an RTX 4090), you can run Alumnium fully on your machine, without needing cloud-based AI providers! Check out the docs! 🦙
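For illustration, here’s a minimal sketch of what a local setup could look like. It assumes the provider is selected through the ALUMNIUM_MODEL environment variable, that the model is already pulled in Ollama under the mistral-small3.1 tag, and that a Selenium driver is wrapped with the Alumni class; see the docs for the exact configuration and names.

```python
# Rough sketch of running Alumnium against a local Ollama model.
# Assumptions (verify against the docs): the provider is chosen via the
# ALUMNIUM_MODEL environment variable, and the model was pulled beforehand
# with `ollama pull mistral-small3.1`.
import os

from selenium.webdriver import Chrome
from alumnium import Alumni

os.environ["ALUMNIUM_MODEL"] = "ollama"  # assumed provider name

driver = Chrome()
driver.get("https://todomvc.com/examples/vue/dist/")

al = Alumni(driver)                          # wraps the Selenium driver
al.do("add a task 'buy milk'")               # natural-language action
al.check("task 'buy milk' is in the list")   # natural-language assertion

driver.quit()
```

Everything here runs on your machine: Alumnium sends its prompts to the local Ollama server instead of a cloud API.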

Apart from that, we’ve done a lot of work to make the project more welcoming for new contributors, improved CI, and groomed the codebase for upcoming features: improving stability and performance by caching LLM responses, exploring Appium support, and more. Stay tuned and join our Discord!