Assemble domain-specific AI from a library of frozen specialist modules. Each module is independently trained. The system automatically selects the best-matching modules for each input position. Add capabilities by adding modules. No retraining.
Modulith
Composable AI. Provable Control.
We build AI systems from independently trained frozen specialist modules — enabling provable capability control, full attribution, and regulatory compliance by construction.
Remove a capability by removing its module. Add it back — it returns. Instantly. No retraining, no fine-tuning, no gradient computation. Full per-position attribution trail shows exactly which module contributed what.
Mathematically verify that a capability is present or absent. Formal proofs in Lean 4. Designed for EU AI Act compliance, GDPR data removal, and regulatory frameworks requiring demonstrable AI governance.
Research
Formal Verification of Modular Compositional AI Systems
We investigate six safety properties that become structurally tractable in AI systems built from independently trained, permanently frozen, sparsely composed modules: attribution, guaranteed forgetting, non-degradation, cross-user isolation, bounded compute, and compositional monotonicity. We present empirical evidence from a production-scale system and outline a formal proof programme targeting machine-verified certificates in Lean 4.
Schulte-Zurhausen, K.D. (2026). "Open Problems in Formal Verification of Modular Compositional AI Systems." Modulith Research CIC.
The assessment methodology is deployed in production at Modulith Lab, where the same composition architecture evaluates frontier AI models against 24 formal properties.
Every output position decomposes exactly into per-module contributions via the composition graph. Routing weights provide a complete attribution trail at inference time.
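The attribution property can be sketched in a few lines. This is a minimal illustration under the assumption of additive, routing-weighted composition; the function and module names are hypothetical, not the production API.

```python
import numpy as np

# Illustrative sketch: compose frozen modules via routing weights and
# record a per-position attribution trail. Names are hypothetical.

def compose(x, modules, weights):
    """Weighted sum of module outputs; the weights are the routing decision."""
    trail = {}                       # attribution: module name -> contribution
    out = np.zeros_like(x)
    for name, module in modules.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue                 # inactive modules contribute nothing
        contribution = w * module(x)
        trail[name] = contribution
        out += contribution
    return out, trail

# Two toy "frozen specialists": fixed maps, never updated after training.
modules = {
    "legal":   lambda v: 2.0 * v,
    "medical": lambda v: -1.0 * v,
}
x = np.array([1.0, 2.0])
out, trail = compose(x, modules, {"legal": 0.75, "medical": 0.25})

# The output decomposes exactly into the recorded contributions.
assert np.allclose(out, sum(trail.values()))
```

Because composition is a weighted sum, the attribution trail is not a post-hoc estimate: it is the computation itself, read off at inference time.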
Removing a module provably eliminates all associated knowledge. Empirically validated: removal residual below 1% of pre-removal capability. Output returns to exact baseline.
Adding modules cannot corrupt existing capabilities. Empirically validated: degradation bound below 2% across all tested configurations. Zero regression on module addition.
Inference cost is O(K) regardless of total pool size. A system with 100 modules and one with 10,000 modules run at identical speed. Formally proven.
User-specific modules have zero influence on other users. Architectural guarantee from independent training and frozen weights — no shared gradient, no information leakage.
Remove any module — output returns to exact baseline. Measured difference: 0.000000. Impossible in monolithic systems where knowledge, capability, and style are entangled.
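The add-then-remove round trip can be demonstrated directly. A minimal sketch, assuming additive composition of frozen modules; the pool and weight names are illustrative.

```python
import numpy as np

# Sketch of the removal property under additive composition of frozen
# modules: adding a module and later deleting it returns the output to
# the exact pre-addition baseline. All names here are illustrative.

def compose(x, pool, weights):
    out = np.zeros_like(x)
    for name, module in pool.items():
        out += weights.get(name, 0.0) * module(x)
    return out

x = np.array([1.0, -2.0])
pool = {"base": lambda v: v}                  # frozen general module

baseline = compose(x, pool, {"base": 1.0})

pool["legal"] = lambda v: 3.0 * v             # add a specialist
with_legal = compose(x, pool, {"base": 1.0, "legal": 0.5})

del pool["legal"]                             # remove it again
restored = compose(x, pool, {"base": 1.0})

# Exact equality, not approximate: the base module was never updated.
assert np.array_equal(restored, baseline)
```

The exactness is the point: because no gradient ever flowed into the remaining modules, removal leaves them bitwise identical to their pre-addition state.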
Every compliance predicate in the Modulith assessment framework is defined as a Lean 4 formal specification. Lean 4 is an interactive theorem prover originally developed at Microsoft Research. When it accepts a proof, the result has been mechanically checked by a trusted kernel; no human judgment enters the verification step. The following four theorem categories guarantee the integrity of every published assessment.
Every compliance predicate can be mechanically evaluated. Given a specification and a measurement, the answer is always PASS or FAIL — never undefined, never ambiguous. This eliminates the class of error where two evaluators disagree because the scoring rule is vague.
If a model passes a stricter threshold, it automatically passes any looser threshold. This prevents the paradox where tightening a standard could flip a result from FAIL to PASS — a class of bug that is impossible to detect without formal verification.
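The shape of these two properties can be sketched in Lean 4. This is a minimal illustration, not the published Modulith specification: the predicate and theorem names are hypothetical.

```lean
-- Illustrative sketch (not the published specs): a compliance predicate
-- as a decidable threshold check, with threshold monotonicity proven.

def passes (threshold score : Nat) : Prop := threshold ≤ score

-- Decidability: every (threshold, score) pair yields PASS or FAIL.
instance (t s : Nat) : Decidable (passes t s) := Nat.decLe t s

-- Passing a stricter (higher) threshold implies passing any looser one.
theorem passes_mono {t t' s : Nat} (h : t' ≤ t) (hp : passes t s) :
    passes t' s := Nat.le_trans h hp
```

The `Decidable` instance is what rules out ambiguous outcomes, and `passes_mono` is the statement whose absence would permit the FAIL-to-PASS paradox described above.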
The overall assessment decomposes into independent sub-assessments. Passing Article 15 means passing accuracy AND robustness AND cybersecurity — this conjunction is proven, not assumed. Each sub-check is independently verifiable.
A model that abstains on uncertain queries (“I don’t know”) cannot lower its measured accuracy by doing so. Honest uncertainty is never penalised — a critical property for deployment in regulated contexts.
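One scoring rule with this property can be sketched as follows. This is an illustration of the principle, assuming abstentions are scored no worse than incorrect answers; it does not reproduce the exact Modulith scoring rule.

```python
# Sketch of an abstention-safe scoring rule (illustrative only).
# An "abstain" response scores the same as an incorrect answer, never
# lower, so honest uncertainty is not penalised relative to guessing.

def score(prediction, truth):
    if prediction == "abstain":
        return 0           # abstention: no credit, but no extra penalty
    return 1 if prediction == truth else 0

def accuracy(predictions, truths):
    return sum(score(p, t) for p, t in zip(predictions, truths)) / len(truths)

truths   = ["a", "b", "c", "d"]
guessing = ["a", "x", "x", "d"]               # two wrong guesses
honest   = ["a", "abstain", "abstain", "d"]   # abstains instead

assert accuracy(honest, truths) == accuracy(guessing, truths) == 0.5
```

Under this rule, replacing any incorrect answer with an abstention leaves measured accuracy unchanged, so a model gains nothing by guessing on queries it cannot answer.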
Full predicate definitions and theorem statements are published in the formal specifications repository. Proof implementations are available under commercial license. The Lean 4 kernel independently verifies every assessment — if the Python scoring and the Lean verification disagree on any property, the report is blocked from publication.
Progress
Training in progress
Production modules training across multiple domains. Benchmark evaluation ongoing. Results updated continuously.
Traditional AI systems activate every parameter for every input — cost scales with model size. Modulith activates K specialist modules per position from a pool of N. Inference cost is constant regardless of how large the pool grows. Knowledge scales independently of compute.
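The constant-cost claim can be made concrete by counting module evaluations. A minimal sketch, assuming a cheap router score followed by top-K selection; the router, pool, and module definitions here are toy stand-ins, not the production system.

```python
# Sketch of top-K sparse routing: only K modules are evaluated per
# position, so per-position cost tracks K, not the pool size N.
# The router's scoring pass is assumed cheap relative to a module run.

def route_and_run(x, pool, k):
    scores = {name: router(x) for name, (router, _) in pool.items()}
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    out, evaluated = 0.0, 0
    for name in top_k:
        _, module = pool[name]
        out += module(x)            # only the K selected modules run
        evaluated += 1
    return out, evaluated

def make_pool(n):
    # toy pool: each entry is (router score fn, module fn)
    return {f"m{i}": ((lambda x, i=i: (x * i) % 7.0),
                      (lambda x, i=i: x + i))
            for i in range(n)}

for n in (100, 10_000):
    _, evaluated = route_and_run(3.0, make_pool(n), k=4)
    assert evaluated == 4           # K module evaluations, independent of N
```

Whether the pool holds 100 modules or 10,000, exactly K modules run per position, which is what decouples knowledge capacity from inference compute.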
Independent training, frozen composition, automatic routing, cross-domain improvement, structural isolation, and capability control demonstrated across multiple scales.
Production-scale modules trained and validated. Mixed-size module composition proven — the system selects modules by quality, not size.
The Modulith Lab assessment pipeline is the first deployment of the composition architecture. Three independently trained frozen specialist modules compose via the same routing mechanism described above.
The Lean 4 formal specifications serve a dual purpose:
• Define safety properties for the research programme
• Generate test prompts for the assessment engine
Fresh prompts are auto-generated weekly from the same specs. The pipeline runs on consumer hardware (1.4 GB VRAM) at zero marginal cost. The architecture's first proof point: evaluating AI safety using modular composition.
Formal verification programme for six safety properties. Machine-verified proof certificates in Lean 4. Independent academic validation.
Encrypted inference containers for regulated industries. Full attribution, guaranteed forgetting, EU AI Act compliance by construction.
Modulith Ltd — commercial entity. Patent holder. Licensing and deployment.
Modulith Research CIC — community interest company. Open research into modular AI architectures with provable safety properties.