Is inference-time stability regulation sufficient to prevent
collapse and unsafe behavior in sequence models under regime shift?
TwoQuarks is an independent AI safety research project. It defines collapse in language models as a stability failure during inference — not a capacity problem — and introduces a modular control layer that monitors internal pre-instability signals and applies targeted interventions at runtime, without modifying the model's parameters, policies, or training objectives.
— Jaime Ledesma.
The Molecule instrument integrates as a black-box probe, designed to detect surface instability signals before behavioral collapse, without altering the model's internal parameters, weights, or components. A test model is included, installable from your terminal with pip. Built on the TwoQuarks framework.
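To make the black-box idea concrete, here is a minimal illustrative sketch, not the actual Molecule implementation: it assumes only that the probe can observe next-token probability distributions, and it computes a toy "pre-instability" signal from entropy drift. All function names here are hypothetical.

```python
import math

def token_entropy(probs):
    """Shannon entropy (nats) of one next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def instability_signal(prob_history, window=3):
    """Toy pre-instability signal: mean absolute entropy drift over
    the last `window` steps. A sudden sharpening or flattening of the
    next-token distribution shows up as large drift, before any
    collapsed text is actually emitted."""
    entropies = [token_entropy(p) for p in prob_history]
    if len(entropies) < 2:
        return 0.0
    recent = entropies[-(window + 1):]
    drifts = [abs(b - a) for a, b in zip(recent, recent[1:])]
    return sum(drifts) / len(drifts)

# A distribution that suddenly sharpens toward determinism:
stable = [[0.25, 0.25, 0.25, 0.25]] * 3
collapsing = stable + [[0.97, 0.01, 0.01, 0.01]]
print(instability_signal(stable))      # 0.0 (no drift)
print(instability_signal(collapsing))  # large positive drift
```

The point of the sketch is that nothing inside the model is touched: the signal is computed purely from surface outputs, which is what makes the probe black-box.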
Behavioral stability audits
Black-box probing with Molecule/PfV. Pre-collapse signals detected before they reach production.
Inference-time instrumentation
TwoQuarks control layer on existing pipelines. No weight modification. No retraining.
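A hedged sketch of what "control layer on existing pipelines" can mean in practice, under the assumption (stated here, not taken from the project) that the pipeline exposes a per-step generation callable. The wrapper monitors a stability signal and intervenes at runtime; weights, policies, and training objectives are never touched. Every name below is hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def drift_signal(history):
    """Toy signal: entropy drop between the last two steps."""
    if len(history) < 2:
        return 0.0
    return max(0.0, entropy(history[-2]) - entropy(history[-1]))

def controlled_generate(step_fn, prompt, max_tokens=10, threshold=0.5):
    """Hypothetical inference-time control loop. Wraps a black-box
    step_fn(tokens) -> (next_token, next_token_probs), watches the
    signal, and halts generation when it crosses the threshold.
    Other interventions (re-prompting, temperature damping) would
    slot in at the same point."""
    tokens, history = list(prompt), []
    for _ in range(max_tokens):
        token, probs = step_fn(tokens)
        history.append(probs)
        if drift_signal(history) > threshold:
            return tokens, "intervened"
        tokens.append(token)
    return tokens, "completed"

# Stub model: healthy at first, then its distribution collapses.
def stub_step(tokens):
    if len(tokens) < 4:
        return "ok", [0.25, 0.25, 0.25, 0.25]
    return "!!", [0.97, 0.01, 0.01, 0.01]

out, status = controlled_generate(stub_step, ["a", "b"])
print(status)  # intervened
```

Because the wrapper only needs the step callable, it layers onto an existing serving pipeline without retraining or weight modification, which is the property the card above advertises.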
RAG & agentic systems
Design and deployment of retrieval-augmented pipelines, tool-calling agents, and MCP integrations.
Azure AI & enterprise stack
Azure AI Foundry, Copilot Studio, multi-agent orchestration. Research rigor applied to production environments.
Cross-architecture research
Validated across Claude, GPT, and Mistral models. C3 Anchor Displacement observed (p = 0.054). Open to funded collaborations.
Custom AI tooling
From prototype to deployment — Python, PyTorch, REST APIs, and evaluation frameworks built for your use case.