TeamIDE Labs

Applied AI research. Building inference, training, and systems infrastructure from the ground up.

Foundry

A pure-C machine learning framework with zero external dependencies: full tensor operations, quantization, and training. Learn more →

Inference Engines

Two approaches to running AI locally: a universal model server and a bare-metal boot environment. Learn more →

Barswap

GPU multiplexer for dense compute systems. Unlock all your GPU hardware on commodity motherboards. Learn more →

Distributed Training

Training language model adapters across commodity laptops with zero inter-node communication. Published research on the Research page.
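One hypothetical way to realize zero inter-node communication, sketched below as an illustration (not necessarily the published method): each laptop trains its own adapter on a local data shard in complete isolation, and the adapters are combined only once, after training, by simple parameter averaging. The `train_local_adapter` stand-in here is invented for the sketch.

```python
# Hypothetical sketch: independent per-node adapter training with a
# single offline merge step, so no communication occurs during training.

def train_local_adapter(shard):
    # Stand-in for local fine-tuning: the "adapter" here is just the
    # mean of the shard, representing parameters learned in isolation.
    return sum(shard) / len(shard)

def merge_adapters(adapters):
    # The only cross-node step: a one-shot parameter average after
    # all nodes have finished training.
    return sum(adapters) / len(adapters)

shards = [[1.0, 2.0], [3.0, 5.0], [4.0, 4.0]]        # per-laptop data
adapters = [train_local_adapter(s) for s in shards]  # zero communication
merged = merge_adapters(adapters)
```

The appeal of this pattern is that the network is touched exactly once, at merge time, so commodity laptops on unreliable links can still participate.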

Memory Systems

Extended context and retrieval systems for language models. Multi-tier memory architectures that keep RAM bounded while enabling long-horizon reasoning.
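A minimal sketch of the bounded-RAM idea, assuming a two-tier design (the class and its capacity policy are illustrative, not the project's actual implementation): a fixed-size hot tier holds recently used entries, and least-recently-used entries spill to a cold tier, represented here by a plain dict standing in for disk.

```python
from collections import OrderedDict

# Hypothetical sketch: a two-tier memory store that caps the hot (RAM)
# tier and spills least-recently-used entries to a cold tier.

class TieredMemory:
    def __init__(self, hot_capacity):
        self.hot = OrderedDict()   # bounded, recency-ordered RAM tier
        self.cold = {}             # unbounded cold tier (disk stand-in)
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        while len(self.hot) > self.hot_capacity:
            old_key, old_val = self.hot.popitem(last=False)  # evict LRU
            self.cold[old_key] = old_val

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)   # refresh recency
            return self.hot[key]
        if key in self.cold:
            value = self.cold.pop(key)
            self.put(key, value)        # promote back to the hot tier
            return value
        return None
```

Because eviction happens on every insert, the hot tier never exceeds its capacity, which is what keeps RAM bounded regardless of how much total context accumulates in the cold tier.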

AI-Native OS

Operating systems that boot directly into AI inference. From UEFI bare metal to full runtime. No Python, no Docker, no cloud.

Model Architecture

Novel hybrid architectures combining recurrent and attention mechanisms. Exploring how architectural choices affect hardware utilization and memory efficiency.

Read our published work on the Research page.

Explore our projects →