Why you should learn Prompt Engineering

Originally published at pythonflow.com

When I first heard about prompt engineering,

I thought:

it's a scam.
How can "explaining something in natural language (i.e., English)" be engineered?
And even if it can be, it must be over-engineering.
However, I was wrong.

By changing (engineering) the prompt, you can get:

  • More accurate output
  • More succinct output
  • Less noise, and exactly what you want

With LLMs, you want to hit the bullseye, not the sides, as often as possible, to reduce hallucinations.

Here are 2 prompt engineering techniques:

1. Few-Shot Prompting

Give examples of the format you want.

The model learns from the examples in the prompt:

This is an example input:

Input: London
Output: LON
Input: Stockholm
Output: ARN
Input: Copenhagen
Output: ?

And AI would output this:

CPH
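Few-shot prompts like this are easy to build programmatically. Here is a minimal sketch; the `build_few_shot_prompt` helper and the example pairs are my own illustration, not part of any library:

```python
def build_few_shot_prompt(examples, query):
    """Format (input, output) example pairs, then the new query
    with an empty Output for the model to complete."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [("London", "LON"), ("Stockholm", "ARN")]
prompt = build_few_shot_prompt(examples, "Copenhagen")
print(prompt)
```

You would send the resulting string to whatever LLM you use; the trailing `Output:` nudges the model to complete the pattern.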

2. Chain-of-Thought (CoT)

Chain-of-thought is about forcing reasoning steps before the final answer.

Without chain-of-thought, you might write a prompt like this:

A shop sells a laptop for $1000.
There is a 20% discount,
then 10% tax is applied.
What is the final price?

You might get the right answer.

You might not.

The model may shortcut or miscalculate.

Using Chain-of-thought, you would write this query instead:

Think step by step.
First calculate the discounted price.
Then apply tax.
Then give the final answer.

Now the model does something like:

20% of 1000 = 200
Discounted price = 800
10% tax on 800 = 80
Final price = 880
Answer: $880
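The chain above is just arithmetic, so you can double-check the model's steps in plain Python (using integer math here to avoid float rounding):

```python
price = 1000
discount = price * 20 // 100       # 20% of 1000 = 200
discounted = price - discount      # discounted price = 800
tax = discounted * 10 // 100       # 10% tax on 800 = 80
final = discounted + tax           # final price = 880
print(final)  # 880
```

This mirrors each step the prompt asked for, which is exactly why CoT helps: each intermediate value can be verified on its own.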

These are just 2 of the many techniques of prompt engineering.

By getting into the habit of writing deliberate prompts, you can get better results from AI.

Considering how much AI has entered our daily lives, I think learning prompt engineering is huge!
