LLMs are typically developed by training on vast amounts of data, the corpus. This costs a great deal of time and money; GPT-3, for example, reportedly cost around $10M to train. That cost is coming down, but training remains expensive. You can avoid it for specific use cases by “fine-tuning” a model with domain-specific data, or you can augment a model’s prompts with reference data, as in Retrieval Augmented Generation (RAG). The next stage in LLM development is models that update and evolve over time, which is what Sakana AI’s paper Transformer²: Self-Adaptive LLMs discusses.
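To make the RAG idea concrete, here is a minimal sketch in Python. Everything in it is illustrative rather than any particular library’s API: the document store, the naive word-overlap retriever, and the prompt template are all assumptions standing in for a real vector database and embedding model.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG).
# The documents, retriever, and prompt format are illustrative
# placeholders, not a specific framework's API.

DOCUMENTS = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
    "Premium members get free shipping on all orders.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query.
    A production system would use embeddings and a vector index."""
    words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user's question with the retrieved reference data."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the reference data below.\n"
        f"Reference data:\n{context_block}\n\n"
        f"Question: {query}"
    )

query = "When can I get a refund?"
prompt = build_prompt(query, retrieve(query, DOCUMENTS))
print(prompt)  # This augmented prompt is what gets sent to the LLM.
```

The key point is that the model itself is never retrained: the relevant knowledge rides along in the prompt at inference time.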

The nice thing about ChatGPT and similar systems is that the complexity of the AI/ML functionality is hidden behind a friendly natural-language interface. This makes it easily accessible to the masses. But behind this easy-to-use facade is a lot of advanced functionality involving a sequence of data-processing steps called a pipeline. An AI-powered business card reader, for example, would first detect text regions and then recognize the individual letters within the context of the words they belong to; a license plate reader would work similarly. Detection is an important step that you often need in your AI/ML projects, and that’s why we will be looking at YOLO.
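As a rough sketch of such a two-stage pipeline, the snippet below uses the Ultralytics YOLO API for the detection stage. The weights file `text_detector.pt` and the `recognize_text()` helper are hypothetical stand-ins for a fine-tuned text detector and an OCR engine, assumed here only to show the shape of the pipeline.

```python
# Sketch of a two-stage "business card reader" pipeline:
# stage 1 detects text regions, stage 2 recognizes the characters.
# The Ultralytics YOLO calls are real; "text_detector.pt" and
# recognize_text() are hypothetical placeholders.

from ultralytics import YOLO

def recognize_text(crop) -> str:
    """Hypothetical recognition stage; in practice an OCR engine
    such as Tesseract or a recognition model would go here."""
    return "<recognized text>"

# Assumed: YOLO weights fine-tuned to detect text regions.
model = YOLO("text_detector.pt")
results = model("business_card.jpg")

for box in results[0].boxes:
    # Stage 1 output: bounding box of one detected text region.
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    crop = results[0].orig_img[int(y1):int(y2), int(x1):int(x2)]
    # Stage 2: recognize the characters inside the region.
    print(recognize_text(crop))
```

Each stage hands its output to the next, which is what makes the whole thing a pipeline rather than a single model call.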