In this talk, we explore advanced techniques for enhancing the reasoning capabilities of Large Language Models (LLMs) through few-shot prompting and chain-of-thought (CoT) reasoning. Few-shot prompting lets an LLM generalize more effectively from just a handful of examples, while CoT reasoning guides the model to break complex tasks into structured, step-by-step solutions. We’ll discuss how combining these methods leads to more accurate and reliable model outputs, particularly for tasks that require logical reasoning, multi-step problem solving, or task adaptation. Practical examples, best practices, and common challenges in prompt engineering will be covered, along with an exploration of how these approaches can push the boundaries of LLM applications in real-world scenarios.
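To make the combination concrete, here is a minimal sketch (not taken from the talk itself) of a few-shot chain-of-thought prompt: two worked examples demonstrate the step-by-step reasoning format, and the model is asked to continue it for a new question. The example questions and the `build_prompt` helper are illustrative assumptions; any chat-completion API could consume the resulting prompt.

```python
# Minimal sketch: few-shot prompting combined with chain-of-thought (CoT).
# The worked examples below are illustrative assumptions, not the speaker's
# own material. Any LLM chat/completion API could be called with this prompt.

FEW_SHOT_COT_PROMPT = """\
Q: A shop sells pens at 3 MAD each. How much do 4 pens cost?
A: Let's think step by step. One pen costs 3 MAD, so 4 pens cost 4 * 3 = 12 MAD.
The answer is 12 MAD.

Q: Sara had 10 apples and gave away 4. How many are left?
A: Let's think step by step. She started with 10 apples and gave away 4,
so 10 - 4 = 6 apples remain. The answer is 6.

Q: {question}
A: Let's think step by step."""


def build_prompt(question: str) -> str:
    """Insert the new question after the worked examples so the model
    imitates the step-by-step reasoning format shown above."""
    return FEW_SHOT_COT_PROMPT.format(question=question)


if __name__ == "__main__":
    # The few-shot examples establish the reasoning format; the trailing
    # "Let's think step by step." nudges the model to continue in that style.
    print(build_prompt("A train travels 60 km/h for 2.5 hours. How far does it go?"))
```

The design choice here is that the examples do double duty: they show the model both the task format (Q/A pairs) and the reasoning style (explicit intermediate steps before the final answer), which is what tends to improve accuracy on multi-step problems.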
Speaker
Anouar Bakouch
Aspiring Machine Learning Engineer
Anouar is a Master's student specializing in AI and Data Science at ENSAM and Software Engineering at UBO. He is passionate about large language models (LLMs) and their applications in natural language processing (NLP). Anouar has focused on fine-tuning and prompt engineering to unlock the full potential of LLMs.