I Built a Free Claude Code Slash Command That Scores Your Prompts Before You Run Them

Posted by BackerLeader · 3 min read

Most developers treat prompt quality like they treated code quality before linters existed — you find out it's bad when it breaks, not before.

That's the AI 'input quality' problem. And it's costing you more than you think.

The Problem

You write a prompt. You run it. The output is mediocre. You tweak the prompt. You run it again. Better, maybe. Or worse. You're not sure why.

This loop happens dozens of times a day across AI-powered workflows, agent pipelines, and Claude Code sessions. There's no signal before you run. No pre-flight check. No linter for your input.

For a single prompt, that's annoying. For a workflow that runs 500 times a week, it's silent, compounding damage: bad outputs at scale with no upstream warning.

What I Built

PQS (Prompt Quality Score) is a pre-flight scoring engine. It grades your prompt across 8 dimensions before you send it to any model: clarity, specificity, context, constraints, output format, role definition, examples, and chain-of-thought structure.

You get a score out of 80, a dimensional breakdown, and three specific improvement suggestions. Before you run. Not after. Before a model ever sees it.

Today I'm releasing /pqs-score as a free Claude Code slash command that brings PQS directly into your terminal workflow.

Type /pqs-score inside Claude Code. That's it.

How It Works
The slash command lives in .claude/commands/, the standard Claude Code location for custom slash commands. When you type /pqs-score followed by your prompt, it:

  • Reads your API key from ~/.pqs/config
  • POSTs your prompt to the PQS scoring engine at pqs.onchainintel.net
  • Returns your score, dimension breakdown, and top improvement suggestions inline in your Claude Code session

No context switching. No browser tab. Score → fix → run, all in the terminal.
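Under the hood, that flow is simple enough to sketch. This is a hypothetical reconstruction, not the actual command: the config key name (`api_key=...`), the `/score` endpoint path, and the JSON field names are my assumptions, not the documented PQS API.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of what /pqs-score does per request.
# Config key name, endpoint path, and JSON shape are assumptions.

# Read the API key from a config line like: api_key=pqs_xxxxx
parse_key() {
  grep -m1 '^api_key' "$1" | cut -d'=' -f2- | tr -d '[:space:]'
}

# POST the prompt to the scoring engine and print the JSON response
score_prompt() {
  key=$(parse_key "$HOME/.pqs/config")
  curl -s "https://pqs.onchainintel.net/score" \
    -H "Authorization: Bearer $key" \
    -H "Content-Type: application/json" \
    -d "{\"prompt\": \"$1\"}"
}
```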

Zero Friction Install

```bash
curl -s https://pqs.onchainintel.net/install.sh | bash
```

That's the entire install. The script will:

  • Generate a machine fingerprint (sha256 hash of hostname + username + OS — no PII leaves your machine)
  • Hit the PQS API to mint your free key, tied to that fingerprint
  • Write the key to ~/.pqs/config (mode 600)
  • Copy /pqs-score into ~/.claude/commands/

No account. No email. No signup wall. Reinstall on the same machine and you get the same key back automatically; the fingerprint lookup is idempotent server-side.
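For the curious, a fingerprint like the one described above takes only a couple of lines. The exact field order and separators the installer uses are assumptions here; the point is that it's a local hash, with no network call involved.

```shell
#!/usr/bin/env sh
# Approximation of the install script's machine fingerprint:
# sha256 of hostname + username + OS. Field order and separators
# are assumptions; nothing leaves the machine.
fingerprint() {
  printf '%s' "$(hostname)-$(whoami)-$(uname -s)" | sha256sum | awk '{print $1}'
}

fingerprint
```

Because the inputs are stable per machine, the hash is stable too, which is what makes the server-side key lookup idempotent.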

What You Get Back

Here's a real run. I typed /pqs-score write me a haiku about postgres inside Claude Code:

Top fixes:
→ Specify the tone or mood for the haiku (playful, technical, reverent)
→ Define specific PostgreSQL aspects to focus on (performance, reliability, features)
→ Request explanation of haiku structure or reasoning process
──────────────────────────────────────
Grade C on a haiku prompt.

Makes sense: it scores well on clarity and output format (it's obvious what you want) but falls flat on role definition, examples, and chain-of-thought. The three fixes tell you exactly what to add.

That's the whole point. You see the gap before the model does, giving you a chance to improve your prompt, and with it the model's output.

Why This Matters For Agent Developers

If you're building agents that call external APIs, your prompts are instructions to autonomous systems. A low-quality prompt doesn't just return a bad answer; it sends your agent down the wrong path, potentially making API calls, spending money, or taking actions based on a misunderstood instruction.

Pre-flight scoring is the prompt equivalent of a type checker. Catch it before it runs.
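As a sketch of that type-checker idea (the threshold and the exit-code convention are my own illustration, not part of PQS), an agent pipeline could refuse to dispatch any prompt that scores below a minimum:

```shell
#!/usr/bin/env sh
# Hypothetical pre-flight gate for an agent pipeline. The threshold
# and the nonzero-exit-on-low-score convention are illustrative.
MIN_SCORE=60

gate() {
  score=$1   # numeric score already parsed from the PQS response
  if [ "$score" -lt "$MIN_SCORE" ]; then
    echo "blocked: prompt scored $score (< $MIN_SCORE); revise before running" >&2
    return 1
  fi
  return 0
}
```

Wire `gate` in front of the step that actually dispatches the prompt, and a weak instruction fails fast instead of burning API calls downstream.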

PQS is also x402-native: agents can pay per score in USDC on Base mainnet. The slash command uses a free API key for human developers. When your agents need to score prompts autonomously, they pay directly. Same infrastructure, two access patterns.

Get It

```bash
# Install (one line)
curl -s https://pqs.onchainintel.net/install.sh | bash
```

Restart Claude Code, then use it:

```
/pqs-score your prompt here
```
GitHub: github.com/OnChainAIIntel/pqs-claude-commands
Free. Open source. No signup.

What's Next

  • pqs-optimize — rewrites your prompt to score 60+ before you run it
  • pqs-batch — score an entire prompt library and get aggregate quality metrics
  • n8n node — drop PQS as a pre-flight gate before your AI node in any workflow

If you build something with it or hit a bug, open an issue on GitHub.

PQS is built by Ken Burbary — digital marketing consultant, AI builder, and founder of OnChainIntel. Follow on X: @kenburbary and @OnChainAIIntel
