Keras 3 Persistence, Duplicate‑Edge Graph Export, and Hyperdense Connectivity Intensity Fix #199
Core Additions
Native Keras 3 .keras round‑trip (architecture + weights + optimizer state) via save_model_and_meta / load_model_safe (safe_mode).
Deterministic seeding utilities for reproducible evaluation and resume training.
Graph exporter that preserves duplicate inbound edges (no accidental dedup), enabling faithful reconstruction in future backends (Torch / ONNX).
Serialization audit script to surface unregistered custom layers early.
CI workflow (minimal deps) exercising persistence and custom layer round‑trip.
Connectivity Semantics Fix
Reinterprets p_lateral_connection as an intensity λ (expected multiplicity) rather than a Bernoulli probability.
Supports hyperdense regimes (λ > 1) discovered by prior TPE / Bayesian optimization; removes the accidental clamping that had collapsed the search space.
Adds floor(λ * decay) + fractional Bernoulli sampling logic for duplicate lateral edges (stable, lightweight alternative to Poisson draws).
Updated docstrings and terminology to prevent future regressions.
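The floor(λ * decay) + fractional Bernoulli scheme can be sketched as follows (function name hypothetical; only the sampling rule comes from this PR):

```python
import math
import random

def sample_multiplicity(intensity, decay=1.0, rng=random):
    """Draw an edge multiplicity with expectation intensity * decay.

    Deterministic floor plus one Bernoulli draw on the fractional part:
    a lightweight, low-variance alternative to a Poisson draw.
    Non-positive effective intensity yields zero multiplicity.
    """
    lam = intensity * decay
    if lam <= 0.0:
        return 0  # negative intensity -> zero multiplicity
    base = math.floor(lam)
    frac = lam - base
    return base + (1 if rng.random() < frac else 0)

# Empirical mean multiplicity approaches lambda * decay.
rng = random.Random(0)
draws = [sample_multiplicity(1.7, rng=rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)
```

Unlike a Poisson draw, multiplicity here never deviates from λ * decay by more than one edge, which keeps per-layer fan-in stable.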
Testing & Validation
Round‑trip persistence test: prediction parity + resume training parity within tolerance.
Custom layer safe_mode serialization test (no custom_objects needed).
Duplicate edge fidelity verified in exporter (repeated inbound edges retained in JSON spec).
New statistical expectation tests: empirical mean multiplicity ≈ λ * decay within tight tolerance.
Gating / max consecutive lateral connection behavior preserved (legacy invariant).
Added intensity sampling unit tests (fractional part, negative intensity → zero multiplicity).
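The duplicate-edge fidelity check boils down to representing inbound connections as a list rather than a set, so repeated entries survive a JSON round-trip. A minimal sketch, with a hypothetical spec shape (the real exporter's field names may differ):

```python
import json

# Hypothetical minimal graph spec: inbound edges are a list, not a set,
# so a node wired twice to the same parent keeps both edges.
spec = {
    "nodes": [
        {"id": "dense_0", "inbound": ["input"]},
        {"id": "dense_1", "inbound": ["dense_0", "dense_0"]},  # duplicate kept
    ]
}

restored = json.loads(json.dumps(spec))
assert restored["nodes"][1]["inbound"] == ["dense_0", "dense_0"]
```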
Risk & Mitigation
The exporter's wide try/except is isolated to the export path; a validate_spec() pass is planned as follow-up.
No shape/dtype metadata yet; this is intentional, to keep the initial spec minimal and stable.
Intensity semantics documented to prevent reintroduction of probability clamping.
Why It Matters
Deterministic, restartable training flows (optimizer state continuity).
Faithful structural export unlocks cross‑framework rebuilds and visualization.
Preserves hyperdense architectural expressivity critical to observed performance gains.
Establishes a clean, versionable contract for future tooling (spec validators, Torch builder).
Follow‑Up (Not in this PR)
Add shapes/dtypes to spec metadata.
Negative custom-layer test (loading an unregistered custom layer should fail with a clear error).
Idempotent save→load→save validation.
Multi‑IO + regularization layer export tests.
Optional Poisson-mode sampler (if ever needed) behind a flag.