Assemble domain-specific AI from a library of frozen specialist modules. Each module is independently trained on a single domain. The system automatically selects the optimal modules for each input.
Modulith
Composable AI. Provable Control.
We build AI systems from independently trained frozen specialist modules — enabling provable capability control, instant customisation, and regulatory compliance.
Add or remove capabilities by adding or removing module files — no retraining required. Removing a module removes its capability; adding it back restores it. Instantly.
Verify that a specific capability is present or absent. Audit-ready. Designed for EU AI Act compliance and regulatory frameworks requiring demonstrable AI governance.
Research
Compositional Scaling Through Independent Frozen Modules
We present an architecture for language model composition where independently trained, frozen specialist modules are dynamically selected and combined at inference time. A lightweight routing network selects a sparse subset of modules per token position, enabling the system to scale knowledge capacity independently of inference cost. We demonstrate that composed modules outperform individual specialists, that cross-domain composition improves all domains, and that capabilities can be verifiably added or removed by adding or removing module files.
44 specialist modules trained independently across 15 training runs. Each module sees only its own domain data. No module has access to any other module's weights or gradients.
A lightweight routing network (~200K parameters) automatically selects the optimal modules per token position. Code prompts activate code specialists. Mathematical prompts activate instruction-following specialists. No manual routing required.
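A minimal sketch of what top-k routing over a module library can look like, assuming a single linear router over token hidden states. All names, sizes, and the linear-router design are illustrative assumptions, not Modulith's actual implementation.

```python
# Hypothetical top-k module router (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

N_MODULES = 44   # frozen specialists in the library
TOP_K = 2        # modules activated per token position
D_MODEL = 64    # hidden size of the shared backbone (assumed)

# The "lightweight routing network": here, one linear layer of
# roughly D_MODEL * N_MODULES parameters.
router_weights = rng.standard_normal((D_MODEL, N_MODULES))

def route(hidden_state: np.ndarray) -> list[int]:
    """Return indices of the TOP_K highest-scoring modules for one token."""
    logits = hidden_state @ router_weights
    return np.argsort(logits)[-TOP_K:].tolist()

token = rng.standard_normal(D_MODEL)
selected = route(token)
```

Because only `TOP_K` modules run per token, inference cost stays flat as the library grows.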
The composed system outperforms any individual module. Modules trained on unrelated domains improve each other through composition — instruction-following modules measurably improve code generation quality.
The system generates coherent multi-domain text from composed frozen modules. No individual module can generate readable text alone — composition creates capability that no individual component possesses.
Removing a module file removes its capability from the system. Re-adding the file restores the capability. No retraining, no fine-tuning, no gradient computation. Verifiable by file-system inspection.
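The claim above reduces to an auditable file-system invariant: a capability exists if and only if its module file exists. A toy sketch, with hypothetical file names and layout:

```python
# Illustrative sketch: capabilities map 1:1 to module files, so
# presence or absence is verifiable by inspecting a directory.
# Paths and the ".module" extension are assumptions.
from pathlib import Path
import tempfile

library = Path(tempfile.mkdtemp())

def add_module(name: str) -> None:
    (library / f"{name}.module").write_bytes(b"frozen weights")

def remove_module(name: str) -> None:
    (library / f"{name}.module").unlink()

def has_capability(name: str) -> bool:
    # The audit check: no model introspection needed.
    return (library / f"{name}.module").exists()

add_module("legal")
present_before = has_capability("legal")
remove_module("legal")
present_after_removal = has_capability("legal")
add_module("legal")
restored = has_capability("legal")
```

No retraining or gradient computation is involved at any step; the audit is an `ls`.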
Total Phase 1 training cost: under $100. Each new domain capability costs approximately $3 to add. The system scales by adding modules, not by retraining existing ones.
Progress
Traditional language models activate every parameter for every token — a 1:1 knowledge-to-compute ratio. Modulith's architecture scales knowledge capacity independently of inference cost. At 1,000 modules, the ratio reaches 200:1. At 10,000 modules: 2,000:1.
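The quoted ratios follow from simple arithmetic once a per-token module budget is fixed. The sketch below assumes five active modules per token — an illustrative figure chosen to reproduce the stated ratios, not a published one:

```python
# Back-of-envelope check of the knowledge-to-compute ratios,
# assuming a fixed number of modules active per token (assumed: 5).
ACTIVE_PER_TOKEN = 5

def knowledge_to_compute(total_modules: int) -> float:
    """Modules stored vs. modules executed per token."""
    return total_modules / ACTIVE_PER_TOKEN

ratio_1k = knowledge_to_compute(1_000)    # 200:1
ratio_10k = knowledge_to_compute(10_000)  # 2,000:1
```

A dense model activates everything it stores, so its ratio is pinned at 1:1 regardless of size.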
| Domain | Modules | Status |
|---|---|---|
| General text | 7 | Complete |
| Code | 3 | Complete |
| Mathematics | 4 | Complete |
| Dialogue | 5 | Complete |
| Scientific | 4 | Complete |
| Legal | 4 | Complete |
| Medical | 5 | Complete |
| Creative | 4 | Complete |
| Instructions | 8 | Complete |
| Total | 44 | Phase 1 |
44 modules, 9 domains, 15 training runs, under $100. Cross-domain composition, automatic routing, coherent generation, and capability control demonstrated.
200+ modules. Benchmark evaluation against established models. Specialised professional domains.
1,000+ modules. Domain-specific model derivation. On-device inference. Formal safety verification.
The architecture is modality-agnostic. Same modules, different interface layers.
Modulith Research CIC — advancing AI safety through open research into modular AI architectures with provable safety properties.