Posts

Showing posts from 2026

Where AI Agents Can Succeed

1. Software Engineering (Well-Scoped)
   Why it works: formal syntax and semantics; immediate falsification (compile/test); tooling enforces correctness.
   Failure mode: architectural judgment, long-term ownership.

2. Data Transformation & ETL Pipelines
   Why: clear input/output contracts; schema validation; deterministic transformations (see the sketch after this list).
   Failure mode: choosing what data matters.

3. Automated Testing & QA
   Why: binary outcomes (pass/fail); explicit specifications; no epistemic ambiguity.
   Failure mode: designing meaningful test coverage.

4. Infrastructure-as-Code / DevOps
   Why: declarative formats; idempotent execution; hard failure states.
   Failure mode: understanding organizational risk.

5. Formal Mathematics (Proof Assistance, Symbolic Work)
   Why: axiomatic constraints; proof checkers enforce truth; zero tolerance for contradiction.
   Failure mode: inventing new axioms or concepts.

6. ...
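The ETL entry turns on contracts that can be checked mechanically. As a rough sketch of that idea, assuming a toy schema and field names that are not from the post, here is a deterministic transformation in Python that validates its input before doing anything else:

from dataclasses import dataclass

# Hypothetical input contract; any real pipeline would define its own.
INPUT_SCHEMA = {"user_id": int, "amount_cents": int}

@dataclass(frozen=True)
class Payment:
    user_id: int
    amount_dollars: float

def validate(record: dict) -> dict:
    # Fail fast when the contract is violated: the kind of hard,
    # checkable signal the list credits for agent reliability.
    for field, expected in INPUT_SCHEMA.items():
        value = record.get(field)
        if not isinstance(value, expected):
            raise ValueError(f"{field}: expected {expected.__name__}, got {value!r}")
    return record

def transform(record: dict) -> Payment:
    # Deterministic: the same input always produces the same output.
    r = validate(record)
    return Payment(user_id=r["user_id"], amount_dollars=r["amount_cents"] / 100)

print(transform({"user_id": 7, "amount_cents": 1250}))
# Payment(user_id=7, amount_dollars=12.5)

The specifics do not matter; any pipeline whose schema check either passes or raises gives an agent the same immediate falsification that compile/test loops give in item 1.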

Toward a Governed Information Theory

From Uncertainty to Survivable Meaning

PART I — THE CLASSICAL FOUNDATION AND ITS FAILURE MODES

1. Shannon Information: Uncertainty Without Meaning
   - Entropy as unpredictability
   - Channel capacity and transmission optimality
   - Why Shannon intentionally excludes semantics
   - The equivalence of random noise and structured knowledge (see the sketch after this outline)
   Angle: What information theory solved—and why that solution is insufficient for intelligence.

2. Classical Information Theory Beyond Shannon
   - Algorithmic information (Kolmogorov complexity)
   - MDL and compressibility
   - The determinism paradox: why computation appears to “create” information
   - Why observer bounds are fatal to classical formulations
   Angle: Description length is not usability.

PART II — COMPUTE-BOUNDED INFORMATION

3. Epiplexity: Learnable Structure Under Constraint
   - Epiplexity vs time-bounded entropy
   - Why deterministic processes generate usable structure
   - Curriculum effects and ordering
   - Empirical relevance to modern ML
   Angle: I...
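The first Part I item, entropy as unpredictability and the equivalence of random noise and structured knowledge, can be made concrete in a few lines. This is an illustrative sketch rather than anything from the post: it computes the zeroth-order empirical entropy of two made-up strings that share symbol frequencies but differ completely in structure.

from collections import Counter
from math import log2

def empirical_entropy(s: str) -> float:
    # H = -sum(p * log2 p) over the symbol frequencies of s, in bits per symbol.
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

structured = "ababababab"   # an obvious repeating pattern
shuffled   = "abbabaabab"   # same symbol counts, order scrambled

print(empirical_entropy(structured))  # 1.0
print(empirical_entropy(shuffled))    # 1.0 -- frequency-based entropy cannot tell them apart

Both strings score exactly 1 bit per symbol, which is the point the outline's first section flags: a measure of frequency-based uncertainty is indifferent to the ordering that makes one string pattern-like and the other noise-like.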