Qwen 3
- Minimum hardware: Basic tier
- Backend: Ollama or LM Studio
- Licence: Apache 2.0
InnerZero runs open-source language models directly on your PC via Ollama or LM Studio. The main families are Qwen 3, Gemma 3, and gpt-oss, covering everything from a 1B voice model on a basic laptop to a 235B reasoning model on a datacenter-class workstation. Local AI is free forever: no subscription, no account required.
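Both Ollama and LM Studio expose a local HTTP API, so talking to an installed model is an ordinary JSON request. The sketch below assumes Ollama's default endpoint (`http://localhost:11434/api/chat`); the model tag `qwen3:4b` is an illustrative guess, not a guaranteed name — check your installed models with `ollama list`.

```python
import json

# Ollama's default local chat endpoint (assumption: default port, no auth).
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, prompt: str) -> bytes:
    """Encode a chat request body for Ollama's /api/chat endpoint."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON reply, not chunks
    }
    return json.dumps(body).encode("utf-8")

payload = build_chat_request("qwen3:4b", "Say hello in five words.")

# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL, data=payload,
#       headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```

Because the request never leaves `localhost`, this works fully offline once the model is pulled.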
Cloud support is optional. If you want frontier-quality models for hard tasks, subscribe to InnerZero's managed cloud plans, or bring your own API keys for any of seven providers with zero markup. Keys stay encrypted on your machine.
Installed automatically based on your hardware tier. These run entirely on your machine, offline if you want.
Licences
- Qwen 3: Apache 2.0
- Gemma 3: Gemma Terms of Use
- gpt-oss: Apache 2.0
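Automatic installation by hardware tier amounts to mapping the machine's resources to a model size. The thresholds and tier names below are assumptions for illustration, not InnerZero's actual cutoffs:

```python
# Illustrative tier selection: RAM cutoffs and tier names are assumptions.
def pick_tier(ram_gb: int, has_gpu: bool) -> str:
    """Map detected hardware to a model tier."""
    if ram_gb >= 128 and has_gpu:
        return "workstation"  # large reasoning models (235B class)
    if ram_gb >= 16:
        return "standard"     # mid-size general models
    return "basic"            # small models (e.g. a 1B voice model)

assert pick_tier(8, False) == "basic"
```

The same recipe then scales: a basic laptop gets the small end of a family, a workstation the large end.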
Supported local families run reliably on consumer hardware, ship under permissive licences, are actively maintained by their upstream teams, and work cleanly through Ollama or LM Studio without custom glue. We prefer families with a range of sizes so the same recipe scales from a budget laptop up to a frontier workstation. Models that satisfy those constraints land in the default hardware tiers and get tested as new versions ship.
This section is optional. Use cloud models only if you choose to add an API key or subscribe to a managed cloud plan. Managed plans bundle multiple providers under one bill; BYO sends your prompts directly to the provider you configure.
Model support last reviewed 19 April 2026. Pricing is set by each provider and changes over time; follow the pricing links above for current rates. Local models are free forever.