Master LLM Prompting: Tips for Better Results

Originally published at medium.com · 3 min read

If you’re diving into the world of large language models (LLMs) like ChatGPT, Claude, or Copilot, you’re already aware of their potential to make work easier. Getting the most out of these tools, however, takes some finesse: crafting the right prompt can mean the difference between a fruitful interaction and a frustrating one. Whether you’re a free user working around usage limits or a premium user with unlimited access, mastering the art of prompting is essential. In this blog, I’ll share some practical tips to help you harness the full potential of ChatGPT (or any other LLM).

Keep in mind that these techniques can behave differently across LLMs. For example, assigning a role can sometimes trigger hallucinations (a major LLM weakness) when the model is asked about imaginary concepts; in such cases, it may confidently provide incorrect information.

1. Give a Role to the LLM

Start your prompt by assigning a role to the LLM: tell it who it is and that it is an expert on the subject matter. This can improve the accuracy of its answers. For example, if you want to fix a bug in your Blazor code, phrase it like this:

You are an Expert Blazor Developer. You have tons of experience with the Blazor framework.…
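If you call an LLM from code, the role typically goes in a separate system message ahead of your actual request. Here is a minimal sketch; the `build_role_prompt` helper and the message-dictionary shape are illustrative assumptions, not any specific SDK's API:

```python
def build_role_prompt(role: str, task: str) -> list[dict]:
    """Assemble a chat-style message list that assigns the LLM a role up front.

    The system message carries the role; the user message carries the task.
    """
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": task},
    ]

messages = build_role_prompt(
    "an Expert Blazor Developer with tons of experience in the Blazor framework",
    "Fix the data-binding bug in the Blazor component below: ...",
)
```

The same message list can then be passed to whichever chat API you use; the point is that the role is stated once, up front, and applies to the whole conversation.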

2. Ask for Output in a Specific Format

We all make this mistake sometimes, myself included. Let’s say we want some dummy JSON data to test our code, and we use an LLM to generate it. Most of the time, we just say,

“Give some dummy data to test this function.”

It will give us dummy data, but most likely not as JSON; it will probably match the language of our code instead. So we have to follow up with,

“Give this data as JSON.”

If we specify the format when we first ask for the dummy data, we get the correct output in a single prompt. Here is an example:

Give me some dummy data to test this function. 
I want these data as a JSON object…
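Asking for a specific format also pays off on the consuming side: a JSON reply can be validated programmatically. A small sketch, where the `reply` string stands in for a hypothetical LLM response:

```python
import json

# Hypothetical LLM reply after asking for dummy data "as a JSON object".
reply = '{"users": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}'

def parse_json_reply(text: str) -> dict:
    """Parse the model's reply, failing fast if it ignored the JSON instruction."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        raise ValueError(f"LLM did not return valid JSON: {e}") from e

data = parse_json_reply(reply)
```

If the model drifts back to prose or code in another language, the parse fails immediately instead of corrupting your test data downstream.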

3. Ask for Output in a Structured Way

I will explain this using one of my experiences. Sometimes I use ChatGPT to brainstorm project ideas. I ask,

“I want to practice this thing. Bla bla bla… So, give me some details about project ideas.”

This prompt does return project ideas, but the level of detail varies: sometimes it explains what I can learn from each project, and sometimes it only lists the ideas, so I have to ask again for the details. If I mention up front what kind of details I need, I get everything in one prompt. I have tried this, and it works. So whenever you ask for something, clearly state what details you need:

Give me a list of TV series released in 2024. 
Include: title, category, brief description, episode runtime, seasons count
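A field list like that can also be built mechanically, which keeps the request consistent across many prompts. A minimal sketch; the `build_structured_prompt` helper is an illustrative assumption:

```python
def build_structured_prompt(request: str, fields: list[str]) -> str:
    """Append an explicit field list so the reply arrives complete in one pass."""
    return f"{request}\nInclude: {', '.join(fields)}."

prompt = build_structured_prompt(
    "Give me a list of TV series released in 2024.",
    ["title", "category", "brief description", "episode runtime", "seasons count"],
)
```

The same helper works for project ideas too: swap the request and pass fields like "what I can learn" and "difficulty" to get all the details in one round trip.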

4. Few-Shot Prompting

Include a few examples with your prompt; they help steer the model toward exactly the output you expect. Here is an example prompt:

I want you to categorize these responses into NEGATIVE or POSITIVE 
like these examples.
Examples:
It's too hot in here - NEGATIVE
very cold. freezing - POSITIVE
not so cold. just normal - NEGATIVE
i feel like i am in a deep freezer - POSITIVE

Here is the response...
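A few-shot prompt like the one above can be assembled from a list of labeled examples, so the same shots are reused for every new input. A sketch under that assumption; the helper name and the `text - LABEL` layout mirror the prompt shown above:

```python
def build_few_shot_prompt(
    instruction: str, examples: list[tuple[str, str]], query: str
) -> str:
    """Prepend labeled examples so the model imitates their format and labels."""
    shots = "\n".join(f"{text} - {label}" for text, label in examples)
    return f"{instruction}\nExamples:\n{shots}\n\nHere is the response: {query}"

prompt = build_few_shot_prompt(
    "Categorize these responses into NEGATIVE or POSITIVE like these examples.",
    [
        ("It's too hot in here", "NEGATIVE"),
        ("very cold. freezing", "POSITIVE"),
        ("not so cold. just normal", "NEGATIVE"),
        ("i feel like i am in a deep freezer", "POSITIVE"),
    ],
    "the office is chilly today",
)
```

Note that the labels here define a custom convention (cold is POSITIVE); the examples teach the model your convention, whatever it is, which is precisely the value of few-shot prompting.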

Conclusion

Mastering the art of prompting for LLMs like ChatGPT is not just about crafting the right words. It’s about clearly defining your expectations and providing context. By giving a specific role, asking for outputs in precise formats, structuring your requests thoughtfully, and including examples, you can enhance the quality of the interactions and achieve the desired results seamlessly. Implementing these strategies will empower you to make the most of this transformative technology, ultimately leading to more productive and satisfying conversations.

A reader's comment: I have used those techniques, and you're right, they are effective. I'd add that when working with large code projects, it's important to go step by step: describe the directory structure and ask only for small blocks of code, because when ChatGPT tries to generate long code, it often comes back with errors or incomplete output. Also, when working with math, specify how many decimals to use; otherwise it will alter the result.
