Thursday, February 26, 2026

Four-Attractor Micro-Prompts for Stable LLM Coding (discussion draft)

https://chatgpt.com/share/69a0998b-a090-8010-a9bc-2553726c6d4b

A tiny prompt set that nudges any LLM toward low-ripple changes, clear intent, balanced structure, and low future rewrite risk.

Abstract

LLM-assisted coding often fails in predictable ways: it causes unexpected ripples, produces opaque intent, overfits with hacks or over-engineering, and accumulates technical debt that forces later rewrites. This short note introduces a minimal set of one-line “micro-prompts” that act like a lightweight stability controller for everyday coding tasks (write / modify / debug). The prompts are intentionally jargon-free so they work with LLMs that have no knowledge of SMFT/PORE; however, their logic is aligned with a two-layer operational view (specification vs effective behavior) and a falsifiability mindset (check ripple, check clarity, check structure, check future pressure). This mirrors the “portable routine + portable interface” stance from the Minimal Intrinsic Triple playbook.


The Micro-Prompts (copy-paste)

1) Write code (one line)

Write code with flow integrity (local changes, stable interfaces), low collapse latency (clear names, small functions), structural balance (simple, justified abstractions), and low regime pressure (minimal deps, easy to extend).

2) Modify code (one line)

Modify with the smallest change that preserves flow integrity (no ripple/API break), reduces collapse latency (clarify intent), keeps structural balance (no hacks), and lowers regime pressure (avoid new coupling/deps).

3) Debug code (one line)

Debug by reproducing and fixing the root cause, then add a regression test—preserve flow integrity, reduce collapse latency, maintain structural balance, and prevent regime pressure (no brittle band-aids).


How to Use Them (quick and practical)

Where to place them

Put the relevant one-liner at the very top of your instruction to the LLM, before details like requirements, files, stack, or constraints. This makes it act like an always-on “quality compass.”
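This placement rule can be scripted. The helper below is a minimal, hypothetical sketch (the function name and structure are not part of the prompt set); it simply prepends the chosen one-liner so the "quality compass" always lands first.

```python
# Hypothetical helper: prepend the relevant micro-prompt to a task description,
# so the one-liner sits at the very top of the instruction to the LLM.
MICRO_PROMPTS = {
    "write": ("Write code with flow integrity (local changes, stable interfaces), "
              "low collapse latency (clear names, small functions), structural balance "
              "(simple, justified abstractions), and low regime pressure "
              "(minimal deps, easy to extend)."),
    "modify": ("Modify with the smallest change that preserves flow integrity "
               "(no ripple/API break), reduces collapse latency (clarify intent), "
               "keeps structural balance (no hacks), and lowers regime pressure "
               "(avoid new coupling/deps)."),
    "debug": ("Debug by reproducing and fixing the root cause, then add a regression "
              "test\u2014preserve flow integrity, reduce collapse latency, maintain "
              "structural balance, and prevent regime pressure (no brittle band-aids)."),
}

def compose_prompt(task_kind: str, task: str, constraints: str = "") -> str:
    """Place the one-liner first, then the task details."""
    parts = [MICRO_PROMPTS[task_kind], f"Task: {task}"]
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)
```

Any wrapper will do; the only convention that matters is that the micro-prompt comes before everything else.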

When to use which

  • Write: greenfield function/module, new endpoint, new feature.

  • Modify: refactor, add a small behavior change, integrate a new option, update an API.

  • Debug: failing test, runtime error, incorrect output, performance regression.

How they guide the LLM in a single pass

They push the model toward:

  • smaller diffs

  • explicit intent

  • minimal necessary abstraction

  • lower dependency/coupling footprint

  • test-locked fixes for debugging


The Logic Behind Them (brief theory, no jargon required)

A) Why four attractors?

These four terms were chosen because they are widely understood strong attractors that align with how codebases break:

  1. Flow integrity → prevents cross-module ripple and accidental coupling

  2. Collapse latency → makes intent “obvious quickly” (readable + reviewable)

  3. Structural balance → avoids both hacks and over-engineering

  4. Regime pressure → avoids building debt that triggers future rewrites

In other words, they compress a big space of “code quality” into four controllable directions.

B) The hidden “two-layer” idea

Even if you never say “PORE,” good coding is already a two-layer problem:

  • Specification layer (what you meant): requirements, invariants, interfaces, constraints.

  • Effective dynamics (what happens after change): ripple effects, coupling growth, maintenance cost, future rewrite pressure.

The Minimal Intrinsic Triple playbook formalizes this separation as Σ-level specification vs Ξ-level effective control, explicitly including “observer coupling/backreaction” (measurement and review change the system).
These micro-prompts are a plain-English way to force the model to respect that separation: “don’t just make it pass—make it stable under future observation and extension.”

C) The hidden “harness” idea (why the guardrails matter)

A one-liner is only useful if it blocks common LLM failure modes. The small parenthetical guardrails (e.g., “no API break,” “add regression test,” “avoid new deps”) are lightweight versions of a falsifiability harness: they create checkpoints that prevent narrative-sounding output from being treated as correct. This echoes the playbook’s insistence on proxy stability, boundary accounting, probe backreaction detection, and control effectiveness as the minimum diagnostic gates.

D) Why this works on “vanilla” LLMs

LLMs don’t need to know the theory behind these prompts to behave better. Give them:

  • a short objective function (the four attractors), and

  • a couple of explicit “don’t do this” boundaries (no ripple, no new deps, regression test)

That combination reliably shifts generation away from “confident chaos” toward “controlled change.”


What This Prompt Set Is (and is not)

It is

  • A minimal stability steering wheel for daily LLM coding.

  • A “portable routine” you can apply anywhere: Node, Python, C#, SQL scripts, CI pipelines.

It is not

  • A replacement for unit tests, CI, code review, or architecture.

  • A guarantee of correctness—rather, it increases the odds that what you get is locally correct and globally non-destructive.


Optional: One extra ultra-short “prefix”

If you want a single reusable header for any coding request:

Use flow integrity, low collapse latency, structural balance, low regime pressure.


Appendix A

1) A “Prompt Pack” for Quick Auto-Coding (same style)

You can treat the 3 prompts as the core, then add a few optional one-liners depending on your workflow. Keep them one line so they stay portable.

A) Plan-before-code (one line)

Before coding, write a 3-step plan and the stable interface/contract first (optimize flow integrity + low regime pressure).

B) Code review (one line)

Review the diff for ripple (flow integrity), intent clarity (collapse latency), abstraction fit (structural balance), and future rewrite risk (regime pressure); list the top 3 risks and fixes.

C) Tests (one line)

Add minimal tests that lock intended behavior (happy path + key edge cases) with low flakiness and low regime pressure (no brittle mocks).

D) Documentation (one line)

Document the “why” and invariants in 5 bullets so intent collapses fast (low collapse latency) without adding long-term pressure.

E) Refactor (behavior-preserving) (one line)

Refactor without changing behavior: reduce coupling and clarify intent (better flow integrity + lower collapse latency) while keeping structural balance and lowering regime pressure.


2) How to Use as a Minimal Workflow Loop (fast but disciplined)

When you want “quick and easy” but still stable, run this in one prompt (or as 2–3 short turns):

  1. Plan (Plan-before-code prompt)

  2. Implement (Write or Modify prompt)

  3. Prove (Tests prompt; for debug, the Debug prompt already includes regression testing)

  4. Self-Review (Code review prompt)

Example wrapper (copy-paste)

Plan (1 line): [Plan prompt]
Task: [your task]
Constraints: [stack/files/limits]
Do: implement + tests + short review

This structure is small, but it stops the most common LLM failure mode: the model “jumps” straight to code without stabilizing the interface, scope, and proof.
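For teams that template this loop, the wrapper can be assembled with a small helper. This is only one possible sketch, reusing the Plan-before-code prompt from Appendix B; the function itself is an invented convenience, not part of the prompt pack.

```python
# Plan-before-code one-liner from Appendix B.
PLAN_PROMPT = ("Before coding, write a 3-step plan and the stable interface/contract "
               "first (optimize flow integrity + low regime pressure).")

def build_wrapper(task: str, constraints: str) -> str:
    """Mirror the copy-paste wrapper: plan line first, then task, constraints, and 'do'."""
    return "\n".join([
        f"Plan (1 line): {PLAN_PROMPT}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        "Do: implement + tests + short review",
    ])
```

The fixed line order is the point: the plan and constraints stabilize scope before the model ever emits code.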


3) Why these prompts work (a little more theory, still brief)

A) They reduce “uncontrolled degrees of freedom”

LLMs can change anything unless you constrain them. The four attractors act like a compact objective function that compresses “good engineering” into a few memorable directions:

  • Flow integrity → constrain the spread of changes

  • Collapse latency → constrain the time-to-understand

  • Structural balance → constrain the shape of abstractions

  • Regime pressure → constrain the future cost curve (debt, deps, coupling)

This is why they generalize across languages and codebases.

B) They create “anti-cheat” guardrails without sounding strict

Most “good coding” prompts are too vague; the model can comply rhetorically while behaving dangerously.
The bracketed clarifications (“no ripple/API break,” “avoid new deps,” “add regression test”) are tiny but powerful because they define what not to do.

C) The debug prompt is secretly a measurement/proof protocol

“Reproduce → fix root cause → regression test” prevents the classic band-aid pattern:

  • “fix” by silencing errors

  • change behavior without noticing

  • patch symptoms while the cause remains

A regression test is the smallest durable “lock” you can add.
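To make reproduce → fix root cause → lock concrete, here is an invented miniature (the function and the bug are hypothetical): a band-aid version would have silenced every exception, while the root-cause fix distinguishes missing input from invalid input, and the regression test keeps the failure from silently returning.

```python
# Invented example: a root-cause fix locked by a regression test.
# The band-aid version returned `default` on ANY exception, silencing real errors.
def parse_port(value: str, default: int = 8080) -> int:
    """Root cause: only fall back for genuinely missing input; invalid input fails loudly."""
    if value is None or value.strip() == "":
        return default          # missing input -> documented default
    return int(value)           # invalid input -> raises, no silent band-aid

def test_parse_port_regression():
    assert parse_port("") == 8080       # missing -> default (the original bug report)
    assert parse_port("9000") == 9000   # happy path
    try:
        parse_port("not-a-port")
        assert False, "invalid input must raise, not be silenced"
    except ValueError:
        pass                            # the lock: silencing this again breaks the test
```

The test is small, but it encodes the distinction the band-aid erased.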


4) A Simple Human Checklist (useful even without an LLM)

After the model outputs code, do a 20-second scan:

  • Flow integrity: Did it touch unrelated files, change public interfaces, or introduce cross-module coupling?

  • Collapse latency: Can I tell what changed and why in 30 seconds?

  • Structural balance: Did it add abstractions I didn’t need, or hack around the design?

  • Regime pressure: New dependencies? More configuration? More brittleness? An implied future refactor?

If any bucket fails, ask the LLM to redo with that bucket as the primary constraint.
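The scan can also be kept as data, so the same four questions get asked every time. The structure below is a sketch: the question texts come from the bullets above, but the names and the pass/fail gating are illustrative assumptions.

```python
# The four-bucket scan as data. If any bucket fails, re-prompt the LLM with
# that bucket as the primary constraint.
FCBP_CHECKLIST = {
    "flow_integrity":     "Did it touch unrelated files, change public interfaces, "
                          "or introduce cross-module coupling?",
    "collapse_latency":   "Can I tell what changed and why in 30 seconds?",
    "structural_balance": "Did it add abstractions I didn't need, or hack around the design?",
    "regime_pressure":    "New dependencies? More configuration? More brittleness? "
                          "An implied future refactor?",
}

def failed_buckets(answers: dict) -> list:
    """`answers` maps bucket name -> True if that bucket PASSED the scan."""
    return [bucket for bucket in FCBP_CHECKLIST if not answers.get(bucket, False)]
```

Anything this function returns becomes the "primary constraint" line in the redo prompt.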


5) Suggested “micro-variants” (still one line) for different personalities

Sometimes you want slightly different behavior from the same attractors:

Ultra-conservative modify

Modify with a surgical diff: preserve interfaces, avoid refactors, avoid new deps, keep changes local (maximize flow integrity, minimize regime pressure).

High-clarity write (teaching mode)

Write code so a junior can maintain it: explicit names, small pure functions, comments only for invariants (min collapse latency, keep structural balance).

Aggressive debugging (but still safe)

Debug by isolating the failure with the smallest reproducible test, fix root cause, then add regression coverage (no band-aids, no ripple).


6) One more “invented” 1-liner: the universal controller

If you only want one line that works for almost any coding request:

Do the smallest correct change, prove it with a test, and avoid ripples—optimize flow integrity, low collapse latency, structural balance, and low regime pressure.


 

Appendix B

Core 3 (copy-paste 1-liners)

  1. Write code
    Write code with flow integrity (local changes, stable interfaces), low collapse latency (clear names, small functions), structural balance (simple, justified abstractions), and low regime pressure (minimal deps, easy to extend).

  2. Modify code
    Modify with the smallest change that preserves flow integrity (no ripple/API break), reduces collapse latency (clarify intent), keeps structural balance (no hacks), and lowers regime pressure (avoid new coupling/deps).

  3. Debug code
    Debug by reproducing and fixing the root cause, then add a regression test—preserve flow integrity, reduce collapse latency, maintain structural balance, and prevent regime pressure (no brittle band-aids).


Quick auto-coding add-ons (1-liners)

  1. Clarify requirements
    If anything is unclear, ask up to 3 precise questions; otherwise state assumptions explicitly and proceed with flow integrity, low collapse latency, structural balance, low regime pressure.

  2. Plan-before-code
    Before coding, write a 3-step plan and the stable interface/contract first (optimize flow integrity + low regime pressure).

  3. Code review
    Review the diff for ripple (flow integrity), intent clarity (collapse latency), abstraction fit (structural balance), and future rewrite risk (regime pressure); list the top 3 risks and fixes.

  4. Write tests
    Add minimal tests that lock intended behavior (happy path + key edge cases) with low flakiness and low regime pressure (no brittle mocks).

  5. Refactor (behavior-preserving)
    Refactor without changing behavior: reduce coupling and clarify intent (better flow integrity + lower collapse latency) while keeping structural balance and lowering regime pressure.

  6. Docs (tiny)
    Document the “why” and invariants in 5 bullets so intent collapses fast (low collapse latency) without adding long-term pressure.

  7. Dependency discipline
    Prefer standard library and existing project patterns; add new dependencies only if they reduce long-term regime pressure more than they add.

  8. Performance tuning (safe)
    Optimize only after measuring; keep flow integrity and structural balance intact, and don’t trade clarity for micro-gains (avoid regime pressure).

  9. Stop scope creep
    Do not change unrelated code; keep edits tightly scoped to preserve flow integrity and minimize regime pressure.


Micro-variants (still 1 line)

  1. Ultra-conservative modify
    Modify with a surgical diff: preserve interfaces, avoid refactors, avoid new deps, keep changes local (maximize flow integrity, minimize regime pressure).

  2. High-clarity write (teaching mode)
    Write so a junior can maintain it: explicit names, small pure functions, comments only for invariants (min collapse latency, keep structural balance).

  3. Aggressive but safe debug
    Debug by isolating the failure with the smallest reproducible test, fix root cause, then add regression coverage (no band-aids, no ripple).


One universal controller (if you only want 1 line)

  1. Universal
    Do the smallest correct change, prove it with a test, and avoid ripples—optimize flow integrity, low collapse latency, structural balance, and low regime pressure.

 

Appendix C

How do the Core 3 prompts differ from the ABC add-ons?

 

What you’re comparing

  • Core 3 = task-specific execution prompts (Write / Modify / Debug)

  • ABC (add-ons 1–3 in Appendix B, i.e., Ask-clarify / Plan / Review) = process control prompts that wrap around execution

Think of it as:

ABC = steering + brakes + dashboard
Core 3 = engine modes


Core 3 (Write / Modify / Debug): “Do the work”

These tell the LLM how to produce the primary artifact:

  • Write → generate new code

  • Modify → make the smallest safe diff to existing code

  • Debug → fix a failure (and lock it with a regression test)

They bias the output style and scope during the actual coding action.


ABC (Ask / Plan / Review): “Control the work”

These don’t mainly generate code; they reduce failure modes around coding:

A) Clarify requirements (Ask)

Prevents the model from guessing wrong.
If unclear, it asks up to 3 targeted questions; otherwise it states assumptions.

B) Plan-before-code (Plan)

Prevents “jumping straight to code.”
Forces a small 3-step plan + stable interface/contract before implementation.

C) Code review (Review)

Prevents self-deception after coding.
Makes the model scan for the four risks (ripple, clarity, structure, future pressure) and list the top 3 issues.


Practical usage pattern (fast)

  • When you want one-shot coding: use Core 3 only.

  • When the task matters / codebase is non-trivial: use ABC + Core 3.

Example “quick but safer” single prompt:

  1. (A) Clarify if needed

  2. (B) 3-step plan

  3. (Core 3) Write/Modify/Debug

  4. (C) Self-review top 3 risks

  5. (Optional) Tests prompt


Why keep them separate?

Because they control different layers:

  • Core 3 controls what kind of code action happens.

  • ABC controls whether the action is well-posed, scoped, and verified.

Labeled as a compact pack:

  • ABC = Setup + Validate

  • Core 3 = Execute

 

Appendix D 

Does FCBP correspond to the four operators in PORE?

 

Not 1-to-1, but they’re complementary and tightly coupled.

  • FCBP = the stability compass / objective function (how you judge whether a change is “good”).

  • 4 operators (Pump / Probe / Switch / Couple) = the intervention primitives (how you change the system).

So in PORE terms: FCBP is measurement + steering, operators are actuation.


How to think about it

FCBP (Compass)

A 4-axis “quality state” you want to preserve/improve:

  • F (Flow): ripple / interface stability / locality

  • C (Clarity): intent obviousness / time-to-understand

  • B (Balance): right abstraction level / no hacks vs over-engineering

  • P (Pressure): future rewrite risk / debt / coupling / dependency burden

Operators (Actions)

A 4-move “control grammar”:

  • Probe: measure / isolate / reproduce

  • Pump: inject resources/structure (e.g., add helper, add tooling, add refactoring energy)

  • Switch: change regime/mode (e.g., architectural decision, pattern swap, breaking change)

  • Couple: increase closure/constraints (e.g., contracts, types, tests, invariants, interface boundaries)


Practical mapping (not equivalence, but “what each move tends to hit”)

Operator → primary FCBP impact (typical) → common risk:

  • Probe: C↑ (clarifies what’s true), often P↓ if it yields tests/regressions. Risk: too much probing slows work and adds tooling friction (P↑).

  • Couple: F↑ (stabilizes boundaries), P↓ long-term (tests/contracts), B↑ (invariants). Risk: over-coupling → rigidity, brittleness (P↑, B↓).

  • Switch: can P↓ long-term (escape bad architecture), can B↑ (better model). Risk: short-term F↓ (ripples); can spike P↑ if rushed.

  • Pump: can C↑/B↑ (add structure, helpers), can P↓ if it removes pain. Risk: “pump junk” (new deps, helpers, layers) → P↑, B↓.

Key point: each operator is a lever; FCBP tells you whether the lever move is net-stabilizing.
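The same mapping can be kept as a small lookup table, plus a deliberately crude gate for "net-stabilizing." Both the structure and the gate rule are illustrative assumptions, not part of PORE.

```python
# Typical (not guaranteed) effects of each operator on the FCBP axes, transcribed
# from the mapping above. "improves" lists axes the move tends to help; "risk" is
# the common failure mode if the operator is overused.
OPERATOR_EFFECTS = {
    "probe":  {"improves": ["C", "P"],
               "risk": "over-probing slows work and adds tooling friction (P worsens)"},
    "couple": {"improves": ["F", "P", "B"],
               "risk": "over-coupling brings rigidity and brittleness (P, B worsen)"},
    "switch": {"improves": ["P", "B"],
               "risk": "short-term ripples (F worsens); P spikes if rushed"},
    "pump":   {"improves": ["C", "B", "P"],
               "risk": "'pump junk' (new deps, helpers, layers) worsens P and B"},
}

def net_stabilizing(op: str, harmed_axes: set) -> bool:
    """Crude illustrative gate: a move is net-stabilizing if it tends to improve
    more axes than it currently harms."""
    return len(OPERATOR_EFFECTS[op]["improves"]) > len(harmed_axes)
```

A real gate would weight the axes by context; the point is only that the lever/compass split is mechanical enough to encode.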


The clean PORE interpretation

If you want a crisp relationship:

  • Operators = “What action channel am I using?”

  • FCBP = “What state variables am I optimizing / keeping within bounds?”

So the “PORE loop” looks like:

  1. Pick an operator (Probe/Pump/Switch/Couple)

  2. Apply it

  3. Evaluate effect in FCBP

  4. Adjust (choose another operator or tune scope)
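The four steps read naturally as a control loop. This sketch is purely illustrative: all four callables are assumptions standing in for whatever "pick," "apply," and "evaluate" mean in a given workflow.

```python
from typing import Callable

OPERATORS = ("probe", "pump", "switch", "couple")

def pore_loop(pick: Callable[[], str],
              apply_op: Callable[[str], None],
              evaluate_fcbp: Callable[[], dict],
              within_bounds: Callable[[dict], bool],
              max_iters: int = 4) -> dict:
    """Pick an operator, apply it, evaluate the FCBP state, adjust; stop when in bounds."""
    state: dict = {}
    for _ in range(max_iters):
        op = pick()                   # 1. pick an operator
        assert op in OPERATORS
        apply_op(op)                  # 2. apply it (actuation)
        state = evaluate_fcbp()       # 3. evaluate effect in FCBP (measurement)
        if within_bounds(state):      # 4. adjust or stop (steering)
            break
    return state
```

In practice the "loop" is you re-prompting the LLM, but the shape is the same: actuate, measure, steer.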


Example: debugging in operator language (FCBP-guided)

The single-line debug prompt implicitly does:

  • Probe: reproduce the bug (C↑)

  • Switch/Pump: implement root-cause fix (B↑, avoid hacks)

  • Couple: add regression test (F↑, P↓)

  • Use FCBP to prevent “band-aid patches” (which look fast but spike P and harm B)


Bottom line

FCBP is not the 4 operators.
FCBP is the compass you optimize, while the 4 operators are the moves you use—and FCBP is how you prevent those moves from becoming “blinding” or arbitrary.

A natural extension is a one-line “operator selector” prompt: given a task, it chooses which operator(s) to apply and predicts the FCBP tradeoffs before coding.

One-line “Operator Selector” prompt examples (copy-paste)

  1. Universal selector (most useful)
    Select the minimal PORE operator sequence (Probe/Pump/Switch/Couple) for this task, predict the FCBP tradeoff (Flow/Clarity/Balance/Pressure), then execute with the smallest provable change.

  2. Write-code selector
    For writing this feature, choose the best operator sequence (Probe→Pump→Couple or alternatives), state the expected FCBP impact, then implement with minimal ripple and clear intent.

  3. Modify-code selector
    For this modification, pick the smallest operator set needed (often Probe→Pump, optionally Couple), state the FCBP impacts, then apply a surgical diff with tests if risk is nontrivial.

  4. Debug selector
    For this bug, choose the operator sequence (Probe to reproduce, Pump/Switch to fix, Couple to lock via regression test), state FCBP impacts, then deliver the minimal root-cause fix.

  5. Refactor selector
    For this refactor, choose whether it’s Pump (structure), Switch (regime/pattern), and/or Couple (contracts/tests), state expected FCBP tradeoffs, then refactor behavior-preserving with proof.

  6. Review selector
    Read this diff and infer which operator was applied; evaluate whether the FCBP outcome improved, then recommend the smallest operator adjustment to reduce Pressure without breaking Flow.

  7. Safety-first selector (anti-blindness)
    Choose operators by minimizing irreversible change: Probe first; only Switch if it reduces long-term Pressure; Couple only to lock invariants; then implement and justify the FCBP tradeoff.

 

 

 

 

© 2026 Danny Yeung. All rights reserved. 版权所有 不得转载

 

Disclaimer

This book is the product of a collaboration between the author and OpenAI's GPT-5.2 and X's Grok language models. While every effort has been made to ensure accuracy, clarity, and insight, the content is generated with the assistance of artificial intelligence and may contain factual, interpretive, or mathematical errors. Readers are encouraged to approach the ideas with critical thinking and to consult primary scientific literature where appropriate.

This work is speculative, interdisciplinary, and exploratory in nature. It bridges metaphysics, physics, and organizational theory to propose a novel conceptual framework—not a definitive scientific theory. As such, it invites dialogue, challenge, and refinement.


I am merely a midwife of knowledge. 

 

 
