Customizing LLMs: Prompt Engineering

Prompt Engineering, or Prompting, is the fundamental LLM customization technique: the process of designing effective prompts to guide an LLM’s response. It is simple, low-cost, and requires no model modifications.

In this post, we will explore some common prompting techniques such as:

  1. Zero-Shot Prompting – Asking the LLM to answer without prior examples.
  2. Few-Shot Prompting – Providing a few examples in the prompt to improve accuracy.
  3. Chain-of-Thought (CoT) Prompting – Encouraging step-by-step reasoning to enhance complex problem-solving.
  4. Meta Prompting – Guiding the reasoning process by introducing structure, constraints, or multi-step instructions.
  5. Self-Consistency Prompting – Generating multiple solutions and selecting the most frequently appearing answer.
  6. Tree of Thought (ToT) Prompting – Exploring multiple reasoning paths before selecting an answer.
  7. Prompt Chaining – Using the output of one prompt as input to the next; less a single-prompt technique than a workflow built from several prompts.
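To make the first few techniques concrete, here is a minimal sketch of how such prompts might be constructed in plain Python. The helper names (`zero_shot`, `few_shot`, and so on) are illustrative, not part of any library, and the actual call to an LLM API is omitted:

```python
from collections import Counter


def zero_shot(question: str) -> str:
    # Zero-shot: the question alone, with no prior examples.
    return f"Q: {question}\nA:"


def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: prepend a few worked examples to steer the model.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"


def chain_of_thought(question: str) -> str:
    # Chain-of-Thought: explicitly invite step-by-step reasoning.
    return f"Q: {question}\nA: Let's think step by step."


def self_consistent(answers: list[str]) -> str:
    # Self-consistency: sample several answers (e.g. several CoT runs)
    # and keep the most frequently occurring one.
    return Counter(answers).most_common(1)[0][0]


examples = [("What is 2 * 3?", "6"), ("What is 4 * 5?", "20")]
print(few_shot("What is 7 * 8?", examples))
print(self_consistent(["56", "54", "56"]))
```

In practice each of these strings would be sent to a model endpoint; the point here is only that the techniques differ in how the prompt text is assembled, not in the model itself.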

LLM Customization

A large language model, or LLM, is a type of machine learning model designed for natural language processing (NLP). These models have an extremely high number of parameters (trillions as of this writing) and are trained on vast amounts of human-generated and human-consumed data. Through this extensive training, LLMs develop predictive capabilities over the syntax, semantics, and knowledge embedded in human language. This enables them to generate coherent and contextually relevant responses, giving the impression of intelligence.


Transformer²: Self-Adaptive LLMs

LLMs are typically developed by training on a vast amount of data, the corpus. This costs a great deal of time and money: training GPT-3, for example, reportedly cost on the order of $10M. That cost is going down, but it remains substantial. For specific use cases you can avoid it by “fine-tuning” an existing model on domain-specific data, or by augmenting prompts with reference data, as in Retrieval-Augmented Generation (RAG). The next stage in LLM development is models that update and evolve over time. This is what’s discussed in Sakana AI’s paper Transformer²: Self-Adaptive LLMs.
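The RAG idea mentioned above can be illustrated with a toy sketch: retrieve the reference passage most relevant to a question and prepend it to the prompt. Here relevance is crude word overlap; real systems use vector embeddings and a vector store, and the function names are hypothetical:

```python
def retrieve(question: str, docs: list[str]) -> str:
    # Toy retrieval: pick the document sharing the most words
    # with the question. Real RAG uses embedding similarity.
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))


def augment(question: str, docs: list[str]) -> str:
    # Prepend the retrieved context so the model answers from it.
    context = retrieve(question, docs)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"


docs = [
    "Paris is the capital of France.",
    "YOLO is a real-time object detection model.",
]
print(augment("What is the capital of France?", docs))
```

The augmented prompt, not the bare question, is what gets sent to the model, which is how RAG grounds answers in reference data without retraining.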

Training YOLO to Detect License Plates

The nice thing about ChatGPT and similar systems is that the complexity of AI/ML functionality is hidden behind a friendly natural language interface, which makes it accessible to the masses. Behind this easy-to-use facade, however, is a lot of advanced functionality involving a sequence of data processing steps called a pipeline. An AI-powered business card reader, for example, would first detect text and then recognize the individual letters within the context of the words they belong to. A license plate reader works similarly. Detection is an important step that you will often need in your AI/ML projects, and that is why we will be looking at YOLO.
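The detect-then-recognize pipeline described above can be sketched as two chained stages. The functions here are placeholders operating on a string standing in for an image; in a real reader, detection would be a model like YOLO and recognition an OCR model:

```python
def detect_regions(image: str) -> list[str]:
    # Stand-in detection stage: split the "image" into
    # candidate text regions (a real detector returns boxes).
    return image.split("|")


def recognize_text(region: str) -> str:
    # Stand-in recognition stage: normalize each region's
    # characters (a real OCR model reads them).
    return region.strip().upper()


def read_plate(image: str) -> list[str]:
    # The pipeline: detection output feeds recognition.
    return [recognize_text(r) for r in detect_regions(image)]


print(read_plate("abc 123 | xyz 789"))
```

The key point is the structure, not the placeholders: each stage consumes the previous stage's output, which is exactly the pipeline shape a license plate reader follows.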
