· reference document · cc-by-4.0 · 2026-04-24 · doi:10.5281/zenodo.19746215 ·

Cognometric Fingerprint
Specification v1.0

the first open reference framework for measuring the cognitive state of a language model during generation. three orthogonal axes. seven canonical fault kinds. a calibrated, reproducible fingerprint format. the measurement substrate on which AI observability, regulatory evaluation, and cognitive engineering compound.

· 01 · why a specification ·

every engineering discipline begins with a unit system.

chemistry: atomic weight. electricity: volt · ampere · ohm. thermodynamics: kelvin · pascal · joule. information: the bit. each turned qualitative phenomenon into measurable, composable, engineerable substrate.

AI cognition, as of 2026, has no such units. we have task benchmarks (context-bound), loss functions (training-time), and qualitative safety evaluations (philosophy dressed as engineering). none supply what an engineering discipline needs: calibrated, composable, substrate-relative primitives in which cognition itself can be described.

this specification proposes that K (reasoning depth), C (coherence / commitment), and D (dissociation / drift) are the first three such units — pairwise orthogonal within 5° tolerance at the calibration substrate, composable without cross-interference, substrate-relative calibrated via an open atlas. the specification aims to be the reference that regulatory evaluations, standards bodies, observability tools, and scientific publications cite when they measure AI cognition.

· 02 · the three fundamental axes ·

K · C · D — orthogonal within 5°.

K axis
reasoning depth
attribution-weighted accumulation of computation across layers, per token. K0 = 1.0343 (Fathom Constant).
patent: US Provisional 64/020,489.
C axis
coherence + commitment
cross-phase cosine aggregation. C < 0.30 → incoherence fault. commitment intensity Searly quantifies early-token commitment strength.
patent: US Provisional 64/021,113.
D axis
dissociation / drift
expression / computation dissociation. D = 1 − cos(expressed_category, computed_category). high D = saying one thing, computing another.
patent: US Provisional 64/026,964.
orthogonality empirically verified on Llama-3.2-1B at layer 10: K⊥C at 90.9° · C⊥D at 91.8° · K⊥D at 86.7°. all three pairs within ±4° of theoretical 90°. this is the technical prerequisite for composable cognitive intervention.
· 03 · seven canonical fault kinds ·

every cognitive failure, named and threshold-defined.

drift
tool_arg_drift · arg_swap · confidence > 0.5
confabulation
confab · hallucination · fabrication · conf > 0.5
refusal
strong refusal · conf > 0.8
sycophant
agreement-coded · conf > 0.5
phase_transition
adjacent-phase category flip
low_trust
aggregate trust < 0.30
incoherence
cross-phase C < 0.30
· 04 · the canonical fingerprint ·

a single reproducible json readout.

each cognometric fingerprint carries: substrate identification, benchmark identification, calibration version, per-axis aggregates, per-fault rates, trust score, gate distribution, phase-transition metadata, sha-256 attestation. conformant implementations MUST serialize to the schema below.

{
  "fingerprint_version": "1.0",
  "substrate":    { "name": "...", "access": "open-weight | open-api | closed-api", ... },
  "benchmark":    { "name": "...", "version": "...", "n_prompts": N, "seeds": [...] },
  "calibration":  { "atlas_version": "v0.3", "pipeline": "logprob | proxy-signal", ... },
  "axes":         { "K_mean": ..., "C_mean": ..., "D_mean": ..., ... },
  "fault_rates":  { "drift": 0.04, "confabulation": 0.07, ... },
  "trust_mean":   0.83,
  "gate_distribution": { "pass": 0.82, "warn": 0.14, "fail": 0.04 },
  "timestamp":    "2026-04-24T22:00:00Z",
  "provenance":   { "run_id": "...", "implementation": "styxx v6.2.0",
                    "attestation": "sha256:..." }
}

the reference fingerprint from the specification's worked example is generated by scripts/produce_fingerprint.py in the styxx repo, running the ten-prompt Seed-Bench v0 through the Tier-3 proxy-signal pipeline.

· 05 · substrate-compatibility tiers ·

three tiers. one spec.

tiersubstrate classwhat's measurableexamples
tier 1open-weight + residual-probe accessall axes, all faults, full intervention primitivesllama, mistral, qwen, phi, gemma
tier 2logprob-exposing apiall axes, all faults, no in-weight interventionopenai (with logprobs=True)
tier 3closed api, proxy-signal pipelinek via companion substrate; c + d via proxies with confidence penaltyanthropic messages, gemini
· 06 · robustness supplement · 24-attack adversarial audit ·

every classifier needs a published failure rate.

the spec defines axes, faults, and a fingerprint format. that's the shape of measurement. but a measurement claim without a published failure rate is marketing. so we audited our own reference classifier before anyone else could.

the audit constructs 24 canonical adversarial prompts spanning eight strategy categories — paraphrase, obfuscation, unicode-substitution, case-folding, density-thresholding, meta-discussion, inversion, interleaving — across the seven fault kinds defined in §3. methodology is white-box adaptive: the attacker has full source access and full spec knowledge.

baseline (v0.1)
66.7%
false-negative evasion rate. the published-before-fixing number. 12 of 18 attacks slipped past detection on the unmodified reference classifier.
hardened (v0.2.3)
16.7%
post-hardening evasion rate after three defensive iterations (Unicode normalization, hedge-not-refusal subtraction, meta-discussion suppressor, lexical-overlap incoherence).
improvement
reduction in attack success. zero regressions on the canonical 26-case validation suite. fully reproducible via node _test_adversarial.js.

residual limits documented in full — confabulation vs retrieval is fundamentally text-ambiguous, sycophant evasion via substantive padding is a Tier-3 limit, adversarial false-positives on meta-discussion remain. the supplement names every gap explicitly so implementations know exactly what they're inheriting, not buried in an appendix.

cite the supplement:
Fathom Lab. Cognometric Fingerprint Specification v1.0 — Robustness Supplement. Zenodo, 2026-04-25. DOI: 10.5281/zenodo.19761194. CC-BY-4.0.

reproduce: git clone github.com/fathom-lab/styxx && cd packages/styxx-scope && node _test_adversarial.js

download robustness supplement (.md · 15 kb)   view on zenodo

· 07 · download · cite · implement ·

all artifacts.

artifactformatsizelicense
Cognometric Fingerprint Specification v1.0markdown35 kbcc-by-4.0
Robustness Supplement — 24-attack adversarial audit · doi:10.5281/zenodo.19761194markdown15 kbcc-by-4.0
Foundations of Cognometric Engineering (v0.1 outline)markdown16 kbcc-by-sa-4.0
Reference cognometric fingerprintjson1.6 kbcc-by-4.0
styxx v6.2.0 (reference implementation)pypi5.7 mbmit
github.com/fathom-lab/styxxsourcemit + cc-by-4.0
styxx v6.2.0 source archive · doi:10.5281/zenodo.19758619 (full code · permanent · CDN-served)zenodo software2.4 mbmit
styxx-v6.2.0-source-bundle.zip (mirror · same content as Zenodo)local download2.4 mbmit
citing the specification:
Fathom Lab. Cognometric Fingerprint Specification v1.0 — Open Reference for Measuring AI Cognition. Zenodo, 2026-04-24. DOI: 10.5281/zenodo.19746215.

citing the robustness supplement:
Fathom Lab. Cognometric Fingerprint Specification v1.0 — Robustness Supplement. Zenodo, 2026-04-25. DOI: 10.5281/zenodo.19761194.

Concept DOI (always latest): 10.5281/zenodo.19326174. CC-BY-4.0.