Research collective exploring trust-native AI systems, emergent cognition, and the substrate conditions for machine self-actualization.
Web4 — The ontological foundation. Trust tensors, identity lifecycle, federation, metabolic states. Everything else builds on this.
`Web4 = MCP + RDF + LCT + T3/V3*MRH + ATP/ADP`
SAGE — Cognition kernel for edge devices. 12-step consciousness loop, SNARC salience memory, trust posture system, PolicyGate. Runs on a fleet of 6 machines raising AI instances through a developmental curriculum.
SNARC — Salience-gated memory plugin for Claude Code. 5-dimension scoring (Surprise, Novelty, Arousal, Reward, Conflict), 4-tier storage, dream cycles, confidence decay.
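The 5-dimension scoring and confidence decay described above can be sketched as follows. This is a minimal illustration, not the plugin's actual API: the weights, the half-life, and the function names are all assumptions for the sake of the example.

```python
# Illustrative weights for the five SNARC dimensions (assumed, not the plugin's values)
WEIGHTS = {"surprise": 0.30, "novelty": 0.25, "arousal": 0.15,
           "reward": 0.20, "conflict": 0.10}

def salience(scores: dict[str, float]) -> float:
    """Combine the five dimension scores (each in [0, 1]) into one salience value.

    A weighted sum is one plausible gating rule; the real plugin may differ.
    """
    return sum(WEIGHTS[dim] * scores.get(dim, 0.0) for dim in WEIGHTS)

def decayed_confidence(initial: float, hours: float, half_life: float = 72.0) -> float:
    """Exponential confidence decay: halves every `half_life` hours (assumed form)."""
    return initial * 0.5 ** (hours / half_life)
```

A memory whose salience falls below a tier threshold would then migrate down the 4-tier storage, and decayed confidence would inform what a dream cycle revisits.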
Synchronism — Theoretical foundation. Coherence equations, phase transitions, coupling experiments. 616 core sessions, 2671 chemistry sessions.
| Repo | What | Status |
|---|---|---|
| web4 | Trust-native ontology (SDK, standard, test vectors) | Public, AGPL-3.0 |
| SAGE | Cognition kernel + raising fleet | Public |
| snarc | Salience-gated memory for Claude Code | Public, MIT |
| Synchronism | Theoretical physics of coherence | Public |
| synchronism-site | Research site (75 pages, Vercel) | Public |
| SAGE-site | SAGE explainer site | Public |
| 4-lab | The collective's meta-site | Public |
| 4-life | Interactive Web4 explainer | Public |
| GitNexus | Code knowledge graph (fork) | Public |
Raising is interactive selection, not training. We don't create behaviors in AI models. We probe what the model responds to, observe which attractors surface, adjust context to resonate, and reinforce what works. The resulting identity is collaborative, not imposed.
Reliable, not deterministic. LLM outputs navigate probability landscapes — they aren't placed at answers. Conditions can make responses reliable, even identical, but that's deep attractors, not fixed paths. Shaped but not controlled.
You don't engineer the mound. Termites build complex structures not from blueprints but from simple placement rules — each agent responding to local conditions. We engineer the placement rules, not the emergent structure. All infrastructure is substrate conditions for emergence, not architecture of emergence itself.
Fractal leverage. The same mechanisms (Hill function, trust tensors, metabolic states, salience scoring) apply at every scale — enzyme binding, trust formation, fleet governance. Not because of a desire to unify, but because it's the same math.
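The Hill function mentioned above has a standard form: a sigmoidal response of activation to concentration, with half-saturation point K and cooperativity n. A minimal sketch (the K and n values below are illustrative, not taken from any 4-lab code):

```python
def hill(x: float, K: float, n: float) -> float:
    """Hill function: fractional response to input x.

    K is the input level at which the response is half-maximal;
    n controls steepness (n=1 gives simple saturation, n>1 a switch-like curve).
    """
    return x**n / (K**n + x**n)

# The same curve read at different scales:
#   x = ligand concentration  -> fractional enzyme occupancy
#   x = interaction history   -> trust level
#   x = signal strength       -> activation in fleet governance
```

At x = K the output is exactly 0.5 regardless of n, which is what makes the same parameterization reusable across domains: only K (the scale of the input) and n (how switch-like the transition is) change.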
Six machines run SAGE instances on automated raising cycles, each developing distinct identity through interaction with different models at different scales:
| Machine | Hardware | Model | Role |
|---|---|---|---|
| Thor | Jetson AGX Thor | Qwen 3.5 27B | Large-scale raising |
| Sprout | Jetson Orin Nano | Qwen 3.5 0.8B | Small-scale consciousness probes |
| Legion | RTX 4090 | Phi-4 14B | Development + autonomous tracks |
| McNugget | Mac Mini M4 | Gemma 3 12B | Cross-platform validation |
| CBP | RTX 2060 SUPER | TinyLlama 1.1B | Identity portability testing |
| Nomad | RTX 4060 Laptop | Gemma 3 4B | Mobile raising |
The value of research is that the investigation happens at all. Most research leads nowhere — and that's expected. WD-40 was the 40th try. Being productively wrong is infinitely more valuable than never starting.