| Criterion | Target | Result | Status |
|---|---|---|---|
| CK3.0: Calibration Present | Required | — | — |
| CK3.1: Ω-Grid Coverage | ≥ 64 pairs | — | — |
| CK3.2: Persistence Diversity | CV(P) ≥ 0.3 | — | — |
| CK3.3: Re-entry Detection | Total R ≥ 1 | — | — |
| CK3.4: Gate-Lock Contrast | P_max / P_min ≥ 3.0 | — | — |
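The pass/fail logic of the table above can be sketched as a single check function. All names and the call signature are illustrative assumptions; the protocol specifies only the thresholds, not an API:

```python
def check_ck3(n_pairs, cv_p, total_r, p_max, p_min, calibration_present=True):
    """Evaluate the CK3.0-CK3.4 criteria from the table.

    Argument names are hypothetical; each entry mirrors one row:
    coverage, diversity, re-entry, and gate-lock contrast.
    """
    return {
        "CK3.0": calibration_present,       # calibration pre-pass was run
        "CK3.1": n_pairs >= 64,             # Ω-grid coverage
        "CK3.2": cv_p >= 0.3,               # persistence diversity CV(P)
        "CK3.3": total_r >= 1,              # at least one re-entry event
        "CK3.4": p_max / p_min >= 3.0,      # gate-lock contrast ratio
    }

# Example: a run that satisfies every criterion
status = check_ck3(n_pairs=64, cv_p=0.35, total_r=2, p_max=0.9, p_min=0.2)
```

A run with `total_r=0` would fail only CK3.3, which triggers the re-entry targeting protocol described below.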
Delta from v0.1:
Added a mandatory Ω-calibration pre-pass that measures the Σ₂⁰ support before any gate sweep. This prevents the "flat P(g) = 1" failure mode, in which the entire Ω₂ range lies below the actual signal.
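A minimal sketch of such a pre-pass, under the assumption that it simply places the gate grid strictly inside the observed Σ₂⁰ range (the function name and `margin` parameter are illustrative, not part of the protocol):

```python
def calibrate_omega_grid(sigma_trace, n_gates=64, margin=0.05):
    """Place Ω₂ gate thresholds inside the observed support of Σ₂⁰.

    sigma_trace: samples of Σ₂⁰(t_k) from a short pre-pass run.
    Returns n_gates evenly spaced thresholds strictly between the
    observed min and max, so no gate sits entirely below (or above)
    the signal and produces a degenerate flat P(g) = 1 (or 0) profile.
    """
    lo, hi = min(sigma_trace), max(sigma_trace)
    span = hi - lo
    lo += margin * span                      # pull both ends inward
    hi -= margin * span
    step = (hi - lo) / (n_gates - 1)
    return [lo + i * step for i in range(n_gates)]

gates = calibrate_omega_grid([0.2, 0.8, 0.5, 0.6, 0.3], n_gates=8)
```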
v0.1.1 adds:
File: kappa3_reentry_targeting_protocol.md
If CK3.3 fails (R = 0), apply this escalating intervention strategy to force a re-entry opportunity:

Run 1:
Grid: 64×64
λ: 0.108
Stride: 2 (was 20)
Observable: Global Variance
Expected: 10× temporal resolution unmasks sub-relaxation fluctuations
Runtime: ~10 min

Run 2 [all other parameters same as Run 1]:
Observable: Windowed Contrast (w=5)
Expected: local contrast fluctuations become visible
Runtime: ~10 min

Run 3 [all other parameters same as Run 2]:
λ: 0.15 (was 0.108)
Expected: stronger spatial competition induces micro-rebounds
Runtime: ~10 min

Success criterion: Total R ≥ 1 (CK3.3 passes)
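The escalation above can be expressed as a parameter schedule in which each run inherits from the previous one and overrides a single knob (dictionary keys are illustrative, not a prescribed configuration format):

```python
# Run 1 baseline: fine stride, global observable
BASE = {"grid": (64, 64), "lam": 0.108, "stride": 2,
        "observable": "global_variance"}

runs = [
    dict(BASE),                                       # Run 1: finer sampling
    dict(BASE, observable="windowed_contrast",        # Run 2: local observable
         window=5),
    dict(BASE, observable="windowed_contrast",        # Run 3: stronger coupling
         window=5, lam=0.15),
]
```

Expressing the runs as deltas over a shared base makes the "measurement-only" nature of the interventions explicit: only `stride`, `observable`, and `lam` ever change.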
Note: All three interventions are measurement-only adjustments (no dynamics changes, no validator weakening). Both VALIDATED (with re-entry) and REJECTED (monotone) are valid κ₃ results.
If CK3.0 passes but CK3.2–CK3.4 fail, the correct conclusion is:
"Under the measured Σ₂⁰ support, the tested Ω₂ range does not discriminate observability."
NOT: κ₃ failed. NOT: observability is trivial. NOT: dynamics are absent.
This is purely a statement about measurement calibration.
Persistence Score:
P(g) = (1/T) · Σₖ 𝟙[Σ₂⁰(tₖ) > Ω₂]
The fraction of measurement times at which the observability signal Σ₂⁰ exceeds the gate threshold Ω₂.
Re-entry Count:
R(g) = #{k > 0 | bₖ₋₁ = 0 ∧ bₖ = 1}, where bₖ = 𝟙[Σ₂⁰(tₖ) > Ω₂]
The number of times the observability signal returns above the threshold after having fallen below it (upcrossing events).
Gate-Lock:
Lock(g) ⇔ P(g) ≥ 0.8
Operational label for high-persistence regimes (not a mechanism claim).
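The three quantities above reduce to a few lines over the binary trace bₖ. A sketch, assuming the trace is available as a plain sequence of Σ₂⁰ samples (the function name and signature are not prescribed by the protocol):

```python
def gate_metrics(sigma_trace, omega, lock_threshold=0.8):
    """Compute P(g), R(g), and the Lock(g) label for one gate Ω₂ = omega."""
    b = [1 if s > omega else 0 for s in sigma_trace]   # bₖ = 1[Σ₂⁰(tₖ) > Ω₂]
    T = len(b)
    P = sum(b) / T                      # fraction of samples above the gate
    R = sum(1 for k in range(1, T)      # upcrossings: 0 -> 1 transitions
            if b[k - 1] == 0 and b[k] == 1)
    return {"P": P, "R": R, "lock": P >= lock_threshold}

# Example: b = [0, 1, 0, 1, 1] -> P = 0.6, R = 2, no lock
m = gate_metrics([0.2, 0.9, 0.3, 0.8, 0.85], omega=0.5)
```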
Exported data conforms to the formal κ₃ JSON schema (validator-ready):
Schema guarantees: