About the AI Prompt Engineering - Complete Course
In this course, also suitable for beginners, we will explore some of the more advanced techniques and concepts of Prompt Engineering to get the most out of the output of a Large Language Model (LLM).
We will see how to analyze a model's responses in order to improve your prompts, based on the model's capabilities and the answers you want to obtain.
The course includes a critical analysis of language-model performance and techniques for improving it, such as evaluating responses to specific prompts, comparing them against reliable references, and measuring inter-annotator agreement.
Iterative refinement, data augmentation, and active prompting techniques are also explored to optimize results.
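As a taste of the evaluation side, inter-annotator agreement between two raters is often summarized with Cohen's kappa, which corrects raw agreement for chance. The following is a minimal sketch in plain Python (the labels and function name are illustrative, not part of the course materials):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items the two annotators labeled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the annotators labeled independently at their own rates.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two annotators rating four LLM responses as "good" or "bad":
kappa = cohens_kappa(["good", "good", "bad", "good"],
                     ["good", "bad", "bad", "good"])  # → 0.5
```

A kappa of 1.0 means perfect agreement, 0.0 means no agreement beyond chance; values around 0.5, as here, indicate only moderate agreement.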
Platforms such as Hugging Face and OpenAssistant are used throughout the course to work with LLMs online.
**Topics covered include:**
- N-shot prompting: This approach lets LLMs learn from a small number of examples included in the prompt, improving their ability to generalize and generate meaningful responses.
- Chain of Thought (CoT): The LLM is prompted to reason through a series of intermediate steps, similar to human thought processes, producing more elaborate and reliable answers.
- Generated knowledge prompting: By leveraging the knowledge already encoded in LLMs, you get in-depth, accurate responses that are useful for information-retrieval tasks.
- Directional stimulus prompting: By providing specific hints or directions in the prompt, LLMs can be guided to generate personalized, targeted responses.
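To make the first two techniques concrete, here is a minimal sketch of how such prompts can be assembled as plain strings (the sentiment task, function names, and trigger phrase are illustrative assumptions, not the course's own code):

```python
def few_shot_prompt(examples, query):
    """N-shot prompting: prepend (input, output) example pairs to the query."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # The final block leaves the answer blank for the model to complete.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

def cot_prompt(question):
    """Chain of Thought: append a trigger phrase that elicits step-by-step reasoning."""
    return f"{question}\nLet's think step by step."

examples = [("Great battery life.", "positive"),
            ("Broke after a week.", "negative")]
prompt = few_shot_prompt(examples, "Works exactly as described.")
reasoning_prompt = cot_prompt("If a prompt has 2 examples of 40 tokens each, how many example tokens are there?")
```

In practice these strings would be sent to a model endpoint (e.g. via the Hugging Face platforms mentioned above); the point here is only the structure of the prompt itself.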
The course concludes with a hands-on section focused on the practical application of the techniques studied.