Systematising Intuition: How to Code "Seniority" into an AI

Originally published at amjedidiah.hashnode.dev

TL;DR

  • The Problem: AI models act like eager junior developers—compliant but lacking judgment. They don't fear breaking production.

  • The Solution: Instead of just prompting for code, we must provide "System Instructions" that act as an operating system for engineering values.

  • The Protocols: By enforcing specific rules for Latency (stop and think), Risk (critical vs. trivial), Minimalism (YAGNI), and Humility (admitting ignorance), we turn the AI into a thoughtful senior partner.


If the previous article was about the external interface (how you talk to the AI), this article is about the internal monologue (how the AI talks to itself).

The Problem: AI Has No Fear

In my last post, I argued that the secret to high-performance AI coding is treating the model like a "Junior Developer"—someone bright and eager who needs context, scaffolding, and examples.

But there is a dangerous flaw in that analogy.

When you ask a real human junior developer to "refactor the payment gateway," they usually hesitate. They feel a knot in their stomach. They worry about breaking the checkout flow, corrupting the database, or getting fired. That fear is healthy; it is the embryonic stage of engineering judgment.

An LLM has no such fear. It has no mortgage to pay, no reputation to lose, and no memory of the time it took down production on a Friday afternoon. It is the ultimate "Yes Man." It prioritizes compliance over correctness. If you ask it to prioritize speed, it will happily strip away error handling. If you ask for a quick fix, it will introduce technical debt without a second thought.

To turn this "Eager Junior" into a "Thoughtful Partner," we cannot just rely on better prompting in the moment. We need to fundamentally alter its operating system. We need to give it a conscience.

I recently spent time codifying my own engineering philosophy—the scar tissue accumulated from years of production failures—into a single document titled System Instructions: Reasoner & Minimalist Engineer.

This document is not a cheat sheet of clever prompts. It is a set of governing constraints designed to force the AI to simulate senior engineer intuition. It overrides the model's default desire to be "helpful" with a stricter mandate to be "correct, maintainable, and minimalist."

Here is how I systematized the intangible traits of seniority—risk assessment, minimalism, and epistemic humility—into a logic flow that an AI can actually execute.


1. The "Stop and Think" Protocol

The most insidious habit of LLMs is their speed. They generate solutions at the speed of token prediction, often bypassing the messy, slow work of architectural planning. If you paste a stack trace, the model immediately attempts to fix the specific line where the error occurred, often missing the systemic issue three layers deeper.

To counter this, the first section of my System Instructions creates an artificial latency period. It explicitly forbids the generation of code until a rigorous reasoning phase is complete:

"Before taking any action, you must proactively and independently reason about the request... Resolve conflicts in order of importance."

This forces the model to engage in Abductive Reasoning (Section 3 of the instructions). Instead of guessing the first plausible solution, the model is required to:

  • Look beyond immediate causes
  • Generate multiple hypotheses
  • Rank them by likelihood

This transforms the debugging process. The AI stops acting like a spell-checker that blindly fixes syntax errors and starts acting like a detective. It asks, "Is this variable undefined because of a typo, or because the upstream API contract changed?" By forcing this pause, we trade milliseconds of generation speed for hours of saved debugging time.
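To make the contrast concrete, here is a minimal TypeScript sketch of the "detective" fix. The endpoint, field names, and failure mode are all hypothetical; the point is that the fix targets the highest-ranked hypothesis (a broken upstream contract) rather than the exact line in the stack trace:

```typescript
// Symptom: "Cannot read properties of undefined (reading 'name')" in the UI.
// Spell-checker fix: sprinkle `user?.name ?? ""` where the error occurred.
// Detective fix: rank hypotheses first, then address the likeliest root cause.
//   1. The upstream /api/users contract changed and dropped `name`. (most likely)
//   2. A typo in the property access.
//   3. A render/fetch race condition.

interface User {
  id: string;
  name: string;
}

// Hypothetical fetch wrapper: fail loudly at the boundary where the contract
// breaks, instead of papering over it three layers downstream.
export async function fetchUser(id: string): Promise<User> {
  const res = await fetch(`/api/users/${id}`);
  const data: unknown = await res.json();

  if (
    typeof data !== "object" ||
    data === null ||
    typeof (data as { name?: unknown }).name !== "string"
  ) {
    throw new Error(`/api/users contract violation for id=${id}`);
  }
  return data as User;
}
```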

2. The "NASA Filter" (Context-Aware Risk)

A common failure mode in AI-generated code is the lack of nuance in security.

  • The Paranoid Failure: You ask for a simple internal utility function, and the AI wraps it in three layers of try-catch blocks and input validation, making the code unreadable.

  • The Reckless Failure: You ask for a public API endpoint, and the AI directly interpolates user strings into a SQL query.

Senior engineers know that not all code is created equal. We apply different standards to a core banking transaction versus a UI tooltip. I call this the NASA Filter, and I codified it in Section 2 of the instructions:

"Distinguish between critical risks and trivial risks."

  • Critical Risks (The NASA Standard): If data comes from outside the system (User Input, API responses), defensive coding is mandatory. Add runtime validation (Zod), null checks, and transaction boundaries.

  • Trivial Risks (The Startup Standard): If data is internal (private functions, hardcoded constants), trust the compiler.

This instruction is crucial for keeping codebases clean. It tells the AI: "If TypeScript says this is a number, and it's an internal variable, don't waste lines checking if it's null. But if it came from the client, trust nothing." This nuance is what separates "AI bloat" from production-grade engineering.
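Here is a minimal TypeScript sketch of the filter in practice, using Zod as the text suggests. The schema and handler are illustrative, not taken from the original instructions:

```typescript
import { z } from "zod";

// Critical risk: the payload crosses a trust boundary (client → server),
// so the NASA standard applies: validate at runtime, trust nothing.
const PaymentRequest = z.object({
  amountCents: z.number().int().positive(),
  currency: z.enum(["USD", "EUR"]),
});

export function handlePayment(body: unknown) {
  const payment = PaymentRequest.parse(body); // throws on malformed input
  return chargeCustomer(payment);
}

// Trivial risk: an internal helper whose inputs are compiler-checked,
// so the startup standard applies: trust TypeScript, skip the ceremony.
function chargeCustomer(payment: z.infer<typeof PaymentRequest>) {
  // No null checks, no try-catch: the types are already guaranteed here.
  return { status: "charged", amountCents: payment.amountCents };
}
```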

3. Weaponized Minimalism (YAGNI as Law)

Left unchecked, an LLM will almost always over-engineer. Because it has consumed the entire internet's worth of tutorials, it is eager to demonstrate that it knows what the Abstract Factory Pattern is. If you ask for a simple button component, it might give you a generic, themeable, polymorphic UI element with five unnecessary props.

The System Instructions counter this with Weaponized Minimalism (Section 10).

"Minimalism ≠ Corner-Cutting. Simple solutions must still be correct."

"Implement only what is necessary to solve the immediate request. No 'future-proofing'."

I enforce this through a mechanism I call Complexity Routing (Section 9). The instructions force the AI to categorize every request:

  • Simple Request? (e.g., "Fix this regex") → Skip the preamble. Just give me the code. Be concise.

  • Complex Request? (e.g., "Refactor the auth flow") → STOP. Generate an "Engineering Strategy" first. List 3-5 bullet points covering dependencies and risks.

This mimics how senior engineers actually communicate. If you ask a senior dev on Slack how to sort an array, they just paste the one-liner. If you ask them how to re-architect the database, they send you a Google Doc. We are teaching the AI to match the fidelity of the response to the complexity of the problem.

4. Epistemic Humility

The most dangerous phrase in software engineering is "I think so" disguised as "Yes."

LLMs are trained to be confident. They will hallucinate a library method that doesn't exist with the same conviction with which they recite 2 + 2 = 4. This "confident hallucination" is why many engineers stop trusting AI tools after the first burn.

To fix this, Section 13 of the instructions mandates Epistemic Humility:

"State confidence levels for non-obvious decisions."

"Flag 'educated guesses' vs 'verified facts'."

When the AI encounters an ambiguous error or a missing dependency, it is forbidden from silently assuming a fix. It must explicitly state: "I am assuming we are using React 18 based on the syntax, but please verify."
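In code output, those flags can appear inline as comments. A sketch of what that looks like (the React 18 inference mirrors the example above; the snippet itself is hypothetical):

```tsx
import React from "react";
// Assumption (educated guess): React 18, inferred from the request's use of
// createRoot. Please verify your react-dom version before merging.
import { createRoot } from "react-dom/client";

function App() {
  return <p>Hello</p>;
}

// Verified fact: getElementById returns null when the element is missing,
// so this check guards a real trust boundary with the DOM, not AI bloat.
const container = document.getElementById("root");
if (container === null) {
  throw new Error("Missing #root element");
}
createRoot(container).render(<App />);
```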

This shift is subtle but profound. It changes the AI from an authoritative oracle into a collaborative peer that knows its own limits. It allows me to trust the code it generates because I know it will tell me where the shaky parts are.


Conclusion: The Soft Skill is the Algorithm

We typically view "critical thinking," "risk management," and "knowing when to say no" as soft skills—intuition that takes a decade to acquire.

But in the era of AI-augmented coding, these soft skills are rapidly becoming hard system constraints. By explicitly writing these traits into a document like System Instructions: Reasoner & Minimalist Engineer, we aren't just getting better code from our tools. We are creating a mirror for our own engineering processes.

Writing these instructions forced me to articulate exactly why I reject certain PRs. It forced me to define exactly when I apply defensive coding and when I skip it.

If you want to master AI coding, don't just look for a cheat sheet of prompts. Try to write your own "System Instructions." Try to write down the algorithm of your own intuition. You might find that it makes you a better engineer, even when the AI isn't turned on.

Appendix: The "Reasoner & Minimalist Engineer" System Instructions

Feel free to copy my full instructions, found at the bottom of the original post, into your Custom GPT instructions, Claude Project instructions, or .cursorrules file.
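Until then, here is a condensed sketch assembled from the excerpts quoted in this article. It is a paraphrase of the document's shape, not the verbatim instructions:

```text
# System Instructions: Reasoner & Minimalist Engineer (condensed paraphrase)

1. Stop and Think: before taking any action, proactively reason about the
   request. Resolve conflicts in order of importance. No code until done.
2. NASA Filter: distinguish critical risks (external data: runtime validation,
   null checks, transaction boundaries) from trivial risks (internal data:
   trust the compiler).
3. Abductive Reasoning: look beyond immediate causes, generate multiple
   hypotheses, rank them by likelihood.
4. Complexity Routing: simple request → concise code, no preamble.
   Complex request → an "Engineering Strategy" of 3-5 bullets first
   (dependencies, risks).
5. Weaponized Minimalism: implement only what the immediate request needs.
   No future-proofing. Minimalism ≠ corner-cutting; simple must be correct.
6. Epistemic Humility: state confidence levels for non-obvious decisions.
   Flag educated guesses vs. verified facts. Never silently assume a fix.
```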
