Don’t Just Ask AI for JSON. Design the Output.

Originally published at dechive.dev · 5 min read

When we use AI in a chat window, a little extra text usually does not matter.

If the model says:

Sure, here is the result:

before answering, we can still read it.

If it adds:

Hope this helps!

after the answer, we can simply ignore it.

But code cannot ignore things the same way.

When AI output needs to be passed into a parser, stored in a database, sent to another service, or used inside an automation pipeline, the shape of the answer matters as much as the answer itself.

This is where many AI workflows break.

We ask:

Please output the result in JSON.

And the AI replies:

Of course! Here is the JSON:

```json
{
  "sentiment": "negative",
  "product": "Notebook Battery",
  "issue": "Battery lasts less than 3 hours"
}
```

Let me know if you need anything else!


To a person, this looks fine.

To code, this may be a problem.

The JSON may be wrapped in a code block.  
There may be text before it.  
There may be text after it.  
The keys may be wrong.  
The structure may change next time.  
The output may look like JSON but fail when parsed.

That is the structured output problem.
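To see the failure concretely, here is a small Python sketch. The chatty reply is condensed from the example above, and the brace-scanning recovery at the end is a crude heuristic for illustration, not a robust fix:

```python
import json

# A typical "helpful" reply: prose before and after the actual JSON.
reply = ('Sure, here is the result:\n'
         '{"sentiment": "negative", "product": "Notebook Battery"}\n'
         'Hope this helps!')

try:
    json.loads(reply)          # fails: the surrounding prose is part of the string
except json.JSONDecodeError:
    print("direct parse failed")

# Crude recovery: take the substring from the first '{' to the last '}'.
start, end = reply.find("{"), reply.rfind("}")
data = json.loads(reply[start:end + 1])
print(data["sentiment"])       # negative
```

This works for the happy path, but it silently breaks on nested braces in prose or multiple objects, which is exactly why the rest of this article argues for designing the output instead of scraping it.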

## “Give me JSON” is not enough

A common mistake is thinking that structured output is just a matter of asking more clearly.

Output in JSON.

Then:

Output only JSON.

Then:

Do not include any explanation. JSON only.

These instructions help.

But they do not fully solve the problem.

LLMs generate text probabilistically. They are very good at continuing patterns. In normal conversations, models often include polite openings, explanations, summaries, and helpful closing sentences.

So even when we ask for JSON, the model may still follow the conversational pattern it has learned.

This does not mean the model is broken.

It means we are asking a language model to produce something that is not really language in the usual sense. JSON is not a friendly paragraph. It is a strict structure.

And strict structures need more than polite instructions.

## The three ways structured output often breaks

In practice, AI-generated JSON usually fails in a few familiar ways.

### 1. Text before the JSON

The model adds something like:

Sure, here is the result:

before the object.

A human can ignore it.
`JSON.parse()` cannot.

### 2. Text after the JSON

The model gives valid JSON, then adds an explanation:

This analysis shows that the user is unhappy with the battery performance.

Now the output is no longer pure JSON.

### 3. JSON-like text that is not valid JSON

This is the most subtle failure.

```
{
  sentiment: "negative",
  'product': "Notebook Battery",
  "issue": "Battery life",
}
```

It looks close enough to a human.

But it is not valid JSON.

Missing quotes, single quotes, trailing commas, comments, extra fields, wrong types — these are all small mistakes that can break a pipeline.
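All of these near-misses are rejected by a strict parser. A quick Python check on the fragment above, next to a correctly quoted version:

```python
import json

# The "looks close enough" fragment: unquoted key, single quotes, trailing comma.
near_miss = """{
  sentiment: "negative",
  'product': "Notebook Battery",
  "issue": "Battery life",
}"""

try:
    json.loads(near_miss)
except json.JSONDecodeError as err:
    # Each of those small deviations violates the JSON grammar.
    print("invalid:", err.msg)

# The strictly quoted, comma-correct version parses fine.
valid = '{"sentiment": "negative", "product": "Notebook Battery", "issue": "Battery life"}'
print(json.loads(valid)["issue"])   # Battery life
```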

## Structure is part of the design

The important shift is this:

Structured output is not just about formatting.

It is about designing the boundary between AI and the rest of the system.

When the AI is only talking to a person, the output can be flexible.

But when the AI is talking to code, the output must be predictable.

A paragraph is useful for reading.
A checklist is useful for review.
A table is useful for comparison.
JSON is useful when another system needs to process the result.

The same answer becomes more valuable when it has the right shape.

## Start by separating thinking from output

One practical pattern is to separate the model’s reasoning space from the final output space.

Instead of saying:

Analyze this review and output JSON.

We can say:

First, analyze the review.

Then output only the final JSON in this exact format:

```json
{
  "sentiment": "positive | neutral | negative",
  "product": "string or null",
  "main_issue": "string or null"
}
```

The idea is simple.

Let the model think freely where format does not matter.
Then make the final output strict.

This reduces the pressure on the model to reason and maintain a rigid format at the same time.
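One way to implement this split (a sketch, not the only approach) is to have the model end its free-form reasoning with a sentinel line and emit the JSON after it; the code then parses only what follows the marker. `FINAL_JSON:` here is an arbitrary marker we invented for this example, not any standard:

```python
import json

MARKER = "FINAL_JSON:"

def extract_final_json(model_output: str) -> dict:
    """Parse only the text after the last MARKER line, ignoring the reasoning."""
    _, sep, tail = model_output.rpartition(MARKER)
    if not sep:
        raise ValueError("marker not found in model output")
    return json.loads(tail)

# Simulated model output: free-form analysis first, strict JSON last.
output = (
    "The review complains about battery life, so sentiment is negative.\n"
    "FINAL_JSON:\n"
    '{"sentiment": "negative", "product": null, "main_issue": "battery life"}'
)
print(extract_final_json(output)["sentiment"])   # negative
```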

## Examples are stronger than explanations

Another useful pattern is showing the model exactly what we want.

Instead of only explaining the format, provide examples.

```txt
Input:
"Galaxy S24 battery drains too fast."

Output:
{"sentiment":"negative","product":"Galaxy S24","issue":"battery life"}

Input:
"iPhone camera is amazing."

Output:
{"sentiment":"positive","product":"iPhone","issue":"camera"}

Now process this:

Input:
{{new_review}}

Output:
```

This works because the model continues the pattern.

If every example starts directly with JSON, the model is more likely to continue with JSON instead of adding a friendly sentence first.

In prompt engineering, examples often beat instructions.
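The few-shot prompt above can be assembled in code so that every example ends in bare JSON, nudging the model's continuation toward an opening brace. A minimal sketch, using the example pairs from the text:

```python
import json

# (review text, expected structured result) pairs, taken from the examples above.
examples = [
    ("Galaxy S24 battery drains too fast.",
     {"sentiment": "negative", "product": "Galaxy S24", "issue": "battery life"}),
    ("iPhone camera is amazing.",
     {"sentiment": "positive", "product": "iPhone", "issue": "camera"}),
]

def build_prompt(new_review: str) -> str:
    parts = []
    for text, expected in examples:
        # Each example ends with bare JSON -- no prose after it.
        parts.append(f'Input:\n"{text}"\n\nOutput:\n{json.dumps(expected)}')
    # The prompt itself ends at "Output:", so the next token should be JSON.
    parts.append(f'Input:\n"{new_review}"\n\nOutput:')
    return "\n\n".join(parts)

prompt = build_prompt("Pixel screen flickers at low brightness.")
print(prompt)
```

Keeping the examples as data rather than hand-written prompt text also means the format can be changed in one place.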

## For serious systems, prompts are not the final layer

Prompting can improve reliability.

But if the output is important enough, prompts alone may not be enough.

A production system may need stronger layers:

1. Clear output instructions
2. Separate reasoning from final output
3. Explicit schema
4. API-level structured output or tool calling
5. Validation and retry logic

This is the difference between asking for structure and designing for structure.

If the output fails, the system should not simply crash.

It should parse, validate, detect the error, and recover if possible.

For example:

```txt
The previous response could not be parsed as JSON.
The error was: trailing comma after final field.
Please return the same result as pure valid JSON only.
```

This kind of recovery loop treats AI output as something that can fail.

And that is a healthier assumption.
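Put together, the last layers can be sketched as a parse-validate-retry loop. In this sketch, `call_model` is a hypothetical stand-in for whatever model client you actually use, and the schema check is deliberately minimal:

```python
import json

# Minimal "schema": required keys and their allowed types.
REQUIRED = {
    "sentiment": str,
    "product": (str, type(None)),
    "issue": (str, type(None)),
}

def validate(data: dict) -> None:
    for key, expected_type in REQUIRED.items():
        if key not in data:
            raise ValueError(f"missing key: {key}")
        if not isinstance(data[key], expected_type):
            raise ValueError(f"wrong type for {key}")

def get_structured(call_model, prompt: str, max_retries: int = 2) -> dict:
    """Ask, parse, validate; on failure, feed the error back and retry."""
    for _ in range(max_retries + 1):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
            validate(data)
            return data
        except (json.JSONDecodeError, ValueError) as err:
            # Tell the model what went wrong instead of crashing.
            prompt = ("The previous response could not be used.\n"
                      f"The error was: {err}\n"
                      "Please return the same result as pure valid JSON only.")
    raise RuntimeError("model never produced valid structured output")
```

A client that answers with chatty text once and valid JSON on the corrective retry will still produce a usable result, which is the whole point of treating failure as expected.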

## Structured output is about trust

The more AI becomes part of workflows, agents, and automation systems, the more important structured output becomes.

If AI is only helping us think, flexible text is fine.

But if AI output becomes input for another process, the format must be reliable.

A workflow cannot depend on “probably valid JSON.”
An automation cannot depend on “usually parseable output.”
A database pipeline cannot depend on “the model will likely follow instructions.”

At that point, structured output is not a nice-to-have.

It is part of system design.

## The real lesson

The lesson is not:

Ask AI for JSON.

The lesson is:

Design the output so another system can trust it.

That means being clear about format, separating reasoning from final output, giving examples, using schemas when needed, and validating the result before trusting it.

AI can generate useful answers.

But useful answers are not always usable outputs.

When we work with AI as developers, we should not only ask:

Is the answer correct?

We should also ask:

Can this output be safely used by the next step?

That is the real reason structured output matters.


Originally published at Dechive:
https://dechive.dev/en/archive/prompt-structured-output
