
Prompt Engineering, or prompting, is the most fundamental LLM customization technique: designing effective prompts to guide an LLM’s response. It is simple, low-cost, and requires no changes to the model itself.
In this post, we will explore some common prompting techniques such as:
- Zero-Shot Prompting – Asking the LLM to answer without prior examples.
- Few-Shot Prompting – Providing a few examples in the prompt to improve accuracy.
- Chain-of-Thought (CoT) Prompting – Encouraging step-by-step reasoning to enhance complex problem-solving (the first three techniques are sketched in code after this list).
- Meta Prompting – Guiding the reasoning process by introducing structure, constraints, or multi-step instructions.
- Self-Consistency Prompting – Generating multiple solutions and selecting the most frequently appearing answer (sketched below).
- Tree of Thoughts (ToT) Prompting – Exploring multiple reasoning paths before selecting an answer.
- Prompt Chaining – Not exactly a prompting technique on its own; it feeds the output of one prompt into the next prompt as input (sketched below).
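
To make the first three concrete, here is a minimal sketch of how the prompts themselves differ. The task, the wording, and the examples are all made up for illustration; any LLM client could be used to send these strings.

```python
# Minimal sketch: the same kinds of task phrased as zero-shot, few-shot, and
# chain-of-thought prompts. These are plain strings ready to send to any LLM.

# Zero-shot: ask directly, with no examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "Review: 'The battery died after two days.'"
)

# Few-shot: prepend a handful of labelled examples before the real input.
few_shot = (
    "Review: 'Great screen, fast shipping.' -> positive\n"
    "Review: 'Stopped working after a week.' -> negative\n"
    "Review: 'The battery died after two days.' ->"
)

# Chain-of-thought: explicitly ask for step-by-step reasoning before the answer.
chain_of_thought = (
    "A store sells pens at 3 for $2. How much do 12 pens cost?\n"
    "Think through the problem step by step, then give the final answer."
)

for name, prompt in [("zero-shot", zero_shot), ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```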
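Self-consistency is also easy to picture: sample the same chain-of-thought prompt several times at a non-zero temperature and keep the answer that appears most often. The sketch below mocks the sampled answers rather than calling a model.

```python
from collections import Counter

# Self-consistency sketch: in practice each entry would be the final answer
# extracted from one sampled LLM completion; here they are hard-coded mocks.
sampled_answers = ["$8", "$8", "$9", "$8", "$8"]

answer, votes = Counter(sampled_answers).most_common(1)[0]
print(f"Majority answer: {answer} ({votes} of {len(sampled_answers)} samples)")
```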
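And prompt chaining is mostly plumbing: the output of one call becomes part of the next prompt. The `call_llm` helper below is a hypothetical stand-in for whatever LLM client you use.

```python
# Prompt chaining sketch. `call_llm` is a hypothetical placeholder for a real
# LLM client call (OpenAI, Anthropic, a local model, ...).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM client of choice")

def handle_ticket(ticket_text: str) -> str:
    # Step 1: summarize the raw ticket.
    summary = call_llm(f"Summarize this support ticket in two sentences:\n{ticket_text}")
    # Step 2: the summary from step 1 feeds the reply-drafting prompt.
    reply = call_llm(f"Draft a polite reply to the customer based on this summary:\n{summary}")
    return reply
```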