
Prompt Engineering and Generative AI - Fundamentals

Price: Paid
Tried by: 4

About Prompt Engineering and Generative AI - Fundamentals course

This course delves into the fundamental concepts of Prompt Engineering and Generative AI. It has subsections on the Fundamentals of Prompt Engineering, Retrieval Augmented Generation, Fine-tuning a large language model (LLM), and Guardrails for LLMs.

Section on Prompt Engineering Fundamentals:

The first segment provides a definition of prompt engineering, its best practices, and an example of a prompt given to the Gemini-Pro model, with references for further reading.
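One widely cited best practice is to structure a prompt with an explicit role, context, task, and output format. The sketch below illustrates that pattern with plain Python string assembly (the actual Gemini call, shown only as a comment, assumes the `google-generativeai` SDK setup the course uses):

```python
# A minimal sketch of a structured prompt, following common prompt-engineering
# best practices: state a role, supply context, give an explicit task and format.
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task: {task}\n"
        f"Respond in the following format: {output_format}"
    )

prompt = build_prompt(
    role="a concise technical writing assistant",
    context="The user is summarizing a research abstract.",
    task="Summarize the abstract in two sentences.",
    output_format="plain text, no bullet points",
)

# With the google-generativeai SDK the prompt would then be sent along the
# lines of (not executed here):
#   model = genai.GenerativeModel("gemini-pro")
#   response = model.generate_content(prompt)
print(prompt)
```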

The second segment explains what it means to stream a response from a large language model, gives examples of providing specific instructions to the Gemini-Pro model, and covers the temperature and token count parameters.
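Conceptually, streaming means the client consumes the response in chunks as they arrive rather than waiting for the full text. The toy sketch below fakes the SDK's streaming iterator with a generator (with `google-generativeai`, the real call would pass `stream=True` to `generate_content`); the generation-parameter values are purely illustrative:

```python
# Conceptual sketch of streaming: the model returns the response in chunks
# that the client consumes incrementally instead of waiting for the full text.
# fake_stream stands in for the SDK's streaming iterator.
from typing import Iterator

def fake_stream(text: str, chunk_size: int = 8) -> Iterator[str]:
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

# Generation parameters such as temperature (randomness) and the maximum
# output token count are passed alongside the prompt; values are illustrative.
generation_config = {"temperature": 0.2, "max_output_tokens": 256}

received = []
for chunk in fake_stream("Streaming lets the client render partial output."):
    received.append(chunk)          # e.g. update the UI per chunk

full_response = "".join(received)
print(full_response)
```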

The third segment explains what the Zero-Shot Prompting technique is, with examples using the Gemini model.

The fourth segment explains the Few-Shot and Chain-of-Thought Prompting techniques, with examples using the Gemini model.
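Few-shot prompting prepends worked examples to the query, and chain-of-thought asks the model to reason step by step before answering. A minimal sketch combining the two (the example Q/A pairs are invented for illustration):

```python
# A minimal sketch of few-shot prompting combined with chain-of-thought:
# worked examples are prepended to the query, and the final instruction asks
# the model to reason step by step before answering.
examples = [
    ("If there are 3 apples and you eat 1, how many remain?",
     "Start with 3 apples. Eating 1 leaves 3 - 1 = 2. Answer: 2"),
    ("A train travels 60 km in 1 hour. How far does it go in 3 hours?",
     "Speed is 60 km/h. In 3 hours: 60 * 3 = 180 km. Answer: 180 km"),
]

def few_shot_cot_prompt(question: str) -> str:
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA: Let's think step by step."

prompt = few_shot_cot_prompt("If a dozen eggs cost 6 dollars, what does one egg cost?")
print(prompt)
```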

Subsequent segments in this section discuss setting up a Google Colab notebook to work with the GPT model from OpenAI and provide examples of the Tree-of-Thoughts prompting technique, including the Tree-of-Thoughts implementation from Langchain used to solve a 4x4 Sudoku puzzle.
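The core idea behind tree-of-thoughts is a search over partial solutions: candidate "thoughts" extend each partial solution, partials are scored, and only the most promising branches are expanded. The toy below (not the Langchain implementation the course uses, and a much smaller task than Sudoku) shows that loop on a deliberately tiny puzzle: pick three digits from 1-9 that sum to 15.

```python
# Toy illustration of tree-of-thoughts search: each "thought" extends a
# partial solution, partial solutions are scored, and only the best BEAM
# branches survive each round. Task: choose 3 digits (1-9) summing to 15.
TARGET, LENGTH, BEAM = 15, 3, 4

def score(path):
    # Heuristic: assume ~5 per remaining digit, and prefer partial sums
    # that keep the target reachable.
    remaining = LENGTH - len(path)
    return -abs(TARGET - sum(path) - remaining * 5)

frontier = [[]]
for _ in range(LENGTH):
    candidates = [path + [d] for path in frontier for d in range(1, 10)]
    # Keep only the BEAM best partial solutions (pruning weak branches).
    frontier = sorted(candidates, key=score, reverse=True)[:BEAM]

best = frontier[0]
print(best, sum(best))
```

A real tree-of-thoughts setup replaces `score` with an LLM-based evaluator and the digit expansion with LLM-proposed thoughts; the pruning loop is the same shape.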

Section on Retrieval Augmented Generation (RAG) :

In this section, the first segment provides a definition of the Retrieval Augmented Generation prompting technique, discusses its merits, and applies Retrieval Augmented Generation to a CSV file using the Langchain framework.
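The retrieve-then-prompt pattern behind RAG over a CSV can be sketched with the standard library alone (the course uses Langchain; here, rows are scored by naive keyword overlap and the best match is injected into the prompt as context — the CSV content is invented for illustration):

```python
# A stdlib-only sketch of retrieval-augmented generation over a CSV file:
# score each row against the question, then build a context-grounded prompt.
import csv
import io

CSV_DATA = """name,description
FAISS,Library for efficient similarity search over dense vectors
RAGAS,Framework for evaluating retrieval augmented generation pipelines
LangSmith,Platform for tracing and evaluating LLM applications
"""

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))

def retrieve(question: str) -> dict:
    # Naive keyword-overlap retrieval; a real pipeline uses embeddings.
    q_words = set(question.lower().split())
    return max(rows, key=lambda r: len(q_words & set(r["description"].lower().split())))

question = "Which tool performs similarity search over vectors?"
context = retrieve(question)
prompt = (f"Answer using only this context:\n"
          f"{context['name']}: {context['description']}\n\n"
          f"Question: {question}")
print(prompt)
```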

In the second segment on Retrieval Augmented Generation, a detailed example involving the Arxiv Loader, the FAISS vector database, and a Conversational Retrieval Chain is shown as part of the RAG pipeline, using the Langchain framework.
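The stages of such a pipeline (load documents, embed them into a vector store, embed the question, retrieve the nearest document, and answer while carrying chat history) can be sketched with stdlib stand-ins: bag-of-words counts in place of real embeddings, and a list scan in place of FAISS. The document texts are invented for illustration.

```python
# Stdlib sketch of the stages in a conversational RAG pipeline:
# embed docs -> index -> embed question -> nearest-neighbour retrieval ->
# prompt with context and chat history.
import math
from collections import Counter

docs = [
    "Attention Is All You Need introduces the transformer architecture.",
    "FAISS enables fast nearest neighbour search over embedding vectors.",
]

def embed(text):
    # Bag-of-words counts stand in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

index = [(embed(d), d) for d in docs]          # the "vector store"
chat_history = []

def ask(question):
    q_vec = embed(question)
    context = max(index, key=lambda pair: cosine(q_vec, pair[0]))[1]
    prompt = f"Context: {context}\nHistory: {chat_history}\nQuestion: {question}"
    chat_history.append(question)              # a real chain also stores answers
    return context, prompt

context, prompt = ask("Which paper introduces the transformer architecture?")
```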

In the third segment on Retrieval Augmented Generation, the evaluation of responses from a large language model (LLM) using the RAGAS framework is explained.
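One metric of this kind is faithfulness: the fraction of answer statements that are supported by the retrieved context. RAGAS computes this with an LLM acting as judge; the toy version below approximates "supported" with simple word overlap, just to show the shape of the metric (the answer and context strings are invented):

```python
# Toy sketch of a faithfulness-style RAG evaluation metric: what fraction of
# the answer's statements are supported by the retrieved context?
def toy_faithfulness(answer: str, context: str) -> float:
    ctx_words = set(context.lower().split())
    statements = [s.strip() for s in answer.split(".") if s.strip()]
    supported = sum(
        1 for s in statements
        if len(set(s.lower().split()) & ctx_words) / len(s.split()) >= 0.5
    )
    return supported / len(statements)

context = "faiss is a library for efficient similarity search over dense vectors"
answer = "FAISS is a library for similarity search. It was written in Haskell."
score = toy_faithfulness(answer, context)  # second statement is unsupported
print(score)
```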

In the fourth segment on Retrieval Augmented Generation, Langsmith is shown complementing the RAGAS framework for the evaluation of LLM responses.

In the fifth segment, using the Gemini model to create text embeddings and perform document search is explained.

Section on Large Language Model Fine-tuning:

In this section, the first segment provides a summary of prompting techniques, with examples involving LLMs from the Hugging Face repository, and explains the differences between prompting an LLM and fine-tuning an LLM.

The second segment provides a definition of fine-tuning an LLM, describes the types of LLM fine-tuning, and covers extracting data to perform exploratory data analysis (EDA), including data cleaning, prior to fine-tuning an LLM.
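The kind of cleaning done at that stage can be sketched in a few lines: normalize whitespace, drop empty examples, and deduplicate (a stdlib-only toy; real pipelines typically use pandas or Hugging Face datasets, and the example strings are invented):

```python
# Minimal sketch of pre-fine-tuning data cleaning: normalize whitespace,
# drop empty examples, and remove duplicates while preserving order.
raw_examples = [
    "  Translate to French: hello  ",
    "Translate to French: hello",      # duplicate after normalization
    "",                                # empty example, dropped
    "Summarize: the quick brown fox",
]

def clean(examples):
    seen, out = set(), []
    for ex in examples:
        ex = " ".join(ex.split())      # collapse inner/outer whitespace
        if ex and ex not in seen:
            seen.add(ex)
            out.append(ex)
    return out

cleaned = clean(raw_examples)
print(len(raw_examples), "->", len(cleaned))
```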

The third segment explains in detail fine-tuning a pre-trained large language model on a task-specific labeled dataset.
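Mechanically, fine-tuning means continuing gradient training of pre-trained weights on the new labeled data. The toy below uses a one-feature logistic regression as a stand-in for an LLM, since the loop (forward pass, loss, gradient, weight update) has the same shape; the data and starting weights are invented:

```python
# Conceptual sketch of fine-tuning on a task-specific labeled dataset:
# "pre-trained" parameters are nudged by gradient steps on new labels.
import math

data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]   # (feature, label)
w, b = 0.1, 0.0        # stand-in for pre-trained weights
lr = 0.5

def loss(w, b):
    # Mean binary cross-entropy over the labeled dataset.
    total = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(data)

before = loss(w, b)
for _ in range(100):                     # "fine-tuning" epochs
    gw = gb = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)
after = loss(w, b)
print(before, after)                     # loss drops as weights adapt
```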

Section on Guardrails for Large Language Models:

In this section, the first segment provides a definition of Guardrails as well as examples of Guardrails from OpenAI.

In the second segment on Guardrails, examples of open-source Guardrail implementations are discussed, with a specific focus on GuardrailsAI for extracting information from text.
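The pattern such libraries implement for extraction can be sketched without GuardrailsAI itself: validate the model's raw output against an expected schema, and re-ask with a corrective instruction when validation fails. In this toy, the "model" is a canned function and the schema check is plain stdlib code; the key names and strings are invented.

```python
# Toy sketch of the guardrail pattern for information extraction:
# validate model output against a schema, re-ask on failure.
import json

REQUIRED_KEYS = {"name", "age"}

def validate(raw: str):
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not REQUIRED_KEYS <= data.keys():
        return None
    return data

def fake_model(prompt: str, attempt: int) -> str:
    # First reply is malformed; the corrective re-ask produces valid JSON.
    return "Sure! Here you go: name=Ada" if attempt == 0 else '{"name": "Ada", "age": 36}'

def guarded_extract(prompt: str, max_retries: int = 2):
    for attempt in range(max_retries + 1):
        result = validate(fake_model(prompt, attempt))
        if result is not None:
            return result
        prompt += "\nReturn valid JSON with keys name and age only."
    raise ValueError("model never produced valid output")

record = guarded_extract("Extract the person's name and age from: Ada, 36.")
```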

In the third segment, the use of GuardrailsAI for generating structured data and interfacing GuardrailsAI with a chat model is explained.

Each of these segments has a Google Colab notebook included.

Company: Udemy