Persistent Memory Through Triple-Loop Consolidation in a Non-Gradient Dissipative Cognitive Architecture
Abstract
Dissipative cognitive architectures maintain computation through continuous energy expenditure, where units that exhaust their energy are stochastically replaced with fresh random state. This creates a fundamental challenge: how can persistent, context-specific memory survive when all learnable state is periodically destroyed? Existing memory mechanisms -- including elastic weight consolidation, synaptic intelligence, and surprise-driven gating -- rely on gradient computation and are inapplicable to non-gradient dissipative systems. We introduce Deep Memory (DM), a non-gradient persistent memory mechanism operating through a triple-loop consolidation cycle: (1) recording of expert-specific content centroids, (2) seeding of replaced units with stored representations, and (3) stabilization through continuous re-entry. We demonstrate that discrete expert routing via Mixture-of-Experts (MoE) gating is a causal prerequisite for DM, preventing centroid convergence that would render stored memories identical. Across ${\sim}970$ simulation runs spanning thirteen experimental blocks: (i) discrete routing is causally necessary for specialization ($\text{MI}=1.10$ vs. $0.001$; $n=91$); (ii) DM achieves $R=0.984$ vs. $0.385$ without memory ($n=16$); (iii) continuous seeding reconstructs representations after interference ($R_\mathrm{recon}=0.978$; one-shot fails; $n=30$); (iv) the mechanism operates within a characterized $(K,p)$ envelope ($n=350$); (v) recording $\times$ seeding is the minimal critical dyad ($n=40$); (vi) DM outperforms non-gradient baselines (Hopfield, ESN) under matched turnover ($n=370$). These results establish DM as a falsifiable mechanism for persistent memory in non-gradient cognitive systems, with functional parallels to hippocampal consolidation.
Author (1)
Jianwei Lou
Quick Access
- Publication Year
- 2026
- Language
- en
- Source Database
- arXiv
- Access
- Open Access ✓