Essentials on Prompt Design

Prompt engineering is one of the most fascinating, and one of the most highly paid, jobs in the AI market today. But do you know how to build your systems with smart prompting techniques? Worry not: here is everything you will ever need to know.


Driven Technophile 🚀 Passionate Developer 🙋🏻‍♀️ Creative Visionary ✨

August 21, 2023




Prompt engineering best practices

Prompt engineering is all about designing your prompts so that the response is what you were actually hoping to see.

The idea of using "unfancy" prompts is to minimize the noise in your prompt to reduce the possibility of the LLM misinterpreting the intent of the prompt. Below are a few guidelines on how to engineer "unfancy" prompts.

In this section, you'll cover the following best practices when engineering prompts:

  • Be concise
  • Be specific, and well-defined
  • Ask one task at a time
  • Improve response quality by including examples
  • Turn generative tasks into classification tasks to improve safety (a sketch appears at the end of this section)
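The snippets in this section call a generation_model object that is not defined in the snippets themselves. Below is a minimal setup sketch, assuming the Vertex AI PaLM text model through the Python SDK; the model name "text-bison@001", the project ID, and the region are placeholder assumptions, so adjust them to your environment:

import vertexai
from vertexai.language_models import TextGenerationModel

# Assumed project and region; replace with your own.
vertexai.init(project="your-project-id", location="us-central1")

# "text-bison@001" is an assumed PaLM text model name.
generation_model = TextGenerationModel.from_pretrained("text-bison@001")

With the model in place, here is a first prompt. Notice how wordy it is; the "be concise" guideline above suggests trimming exactly this kind of padding: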

prompt = "What do you think could be a good name for a flower shop that specializes in selling bouquets of dried flowers more than fresh flowers? Thank you!"

print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)

Improve response quality by including examples

Another way to improve response quality is to add examples in your prompt. The LLM learns in-context from the examples on how to respond. Typically, one to five examples (shots) are enough to improve the quality of responses. Including too many examples can cause the model to over-fit the data and reduce the quality of responses.

Similar to classical model training, the quality and distribution of the examples are very important. Pick examples that are representative of the scenarios that you need the model to learn, and keep the distribution of the examples (e.g. number of examples per class in the case of classification) aligned with your actual distribution.

Zero-shot prompt

Below is an example of zero-shot prompting, where you don't provide any examples to the LLM within the prompt itself.

prompt = """Decide whether a Tweet's sentiment is positive, neutral, or negative.

Tweet: I loved the new YouTube video you made!
Sentiment:
"""

print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)

One-shot prompt

Below is an example of one-shot prompting, where you provide one example to the LLM within the prompt to give some guidance on what type of response you want.

prompt = """Decide whether a Tweet's sentiment is positive, neutral, or negative.

Tweet: I loved the new YouTube video you made!
Sentiment: positive

Tweet: That was awful. Super boring 😠
Sentiment:
"""

print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)

Few-shot prompt

Below is an example of few-shot prompting, where you provide a few examples (here, two) to the LLM within the prompt to give some guidance on what type of response you want.

prompt = """Decide whether a Tweet's sentiment is positive, neutral, or negative.

Tweet: I loved the new YouTube video you made!
Sentiment: positive

Tweet: That was awful. Super boring 😠
Sentiment: negative

Tweet: Something surprised me about this video - it was actually original. It was not the same old recycled stuff that I always see. Watch it - you will not regret it.
Sentiment:
"""

print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)
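If you reuse the same labeled examples across many inputs, a small helper can assemble the few-shot prompt for you. The sketch below is an assumption rather than part of the original snippets; build_few_shot_prompt and the examples list are made-up names, and the helper simply reproduces the prompt structure shown above:

# Labeled example Tweets to include as shots; keep their distribution close to your real data.
examples = [
    ("I loved the new YouTube video you made!", "positive"),
    ("That was awful. Super boring 😠", "negative"),
]

def build_few_shot_prompt(examples, new_tweet):
    # Start with the task instruction, then add one block per labeled example.
    lines = ["Decide whether a Tweet's sentiment is positive, neutral, or negative.", ""]
    for tweet, sentiment in examples:
        lines += [f"Tweet: {tweet}", f"Sentiment: {sentiment}", ""]
    # Leave the sentiment of the new Tweet for the model to complete.
    lines += [f"Tweet: {new_tweet}", "Sentiment:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "Watch it - you will not regret it.")
print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)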

Choosing between zero-shot, one-shot, few-shot prompting methods

Which prompt technique to use depends solely on your goal. Zero-shot prompts are more open-ended and can give you creative answers, while one-shot and few-shot prompts teach the model how to behave, so you get more predictable answers that are consistent with the examples provided.
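Finally, as noted in the list of best practices, you can improve safety and predictability by turning a generative task into a classification task: instead of asking for free-form text, ask the model to pick one of the options you supply. The prompt below is an assumed illustration of that idea, not one of the original examples:

prompt = """I am a high school student who wants to learn programming.
Which one of these activities do you suggest? Answer with exactly one option and explain why.
a) learn Python
b) learn JavaScript
c) learn Fortran
"""

print(generation_model.predict(prompt=prompt, max_output_tokens=256).text)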