
Prompt Engineering, or Prompting, is the process of structuring or crafting an instruction in order to produce the best possible output from a generative artificial intelligence (AI) model (Wikipedia). It is the most fundamental LLM customization technique: it is simple and low-cost, and it requires no model modifications.
In this post, we will explore some common prompting techniques such as:
- Zero-Shot Prompting – Asking the LLM to answer without prior examples.
- Few-Shot Prompting – Providing a few examples in the prompt to improve accuracy.
- Chain-of-Thought (CoT) Prompting – Encouraging step-by-step reasoning to enhance complex problem-solving.
- Meta Prompting – Guiding the reasoning process by introducing structure, constraints, or multi-step instructions.
- Self-Consistency Prompting – Generating multiple solutions and selecting the most frequently appearing answer.
- Tree of Thought (ToT) Prompting – Exploring multiple reasoning paths before selecting an answer.
- Prompt Chaining – Less a single prompting technique than a workflow pattern: using the output of one prompt as the input to the next.
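Several of these techniques differ only in how the prompt string is assembled before it is sent to a model. As a minimal sketch (the prompt templates and the majority-vote helper below are illustrative, not tied to any specific LLM API), here is how zero-shot, few-shot, chain-of-thought, and self-consistency might look in plain Python:

```python
from collections import Counter

def zero_shot(question: str) -> str:
    # Zero-shot: ask the question directly, with no prior examples.
    return f"Q: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: prepend a few worked examples before the real question.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def chain_of_thought(question: str) -> str:
    # CoT: nudge the model toward step-by-step reasoning.
    return f"Q: {question}\nA: Let's think step by step."

def self_consistency(sampled_answers: list[str]) -> str:
    # Self-consistency: sample several answers (e.g. from repeated
    # CoT runs) and keep the most frequent one.
    return Counter(sampled_answers).most_common(1)[0][0]

prompt = few_shot("What is 5 + 7?", [("What is 2 + 3?", "5")])
print(prompt)
print(self_consistency(["12", "12", "11"]))
```

In practice, each generated string would be passed to whatever model client you use; prompt chaining then simply feeds one call's response into the next call's template.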