Webinar: "How to Instruction Tune a Base Language Model"

The Data Phoenix team invites you to our upcoming webinar, which will take place on December 7th at 10 am PST.

  • Topic: How to Instruction Tune a Base Language Model
  • Speaker: Harpreet Sahota (Deep Learning Developer Relations Manager at Deci)
  • Participation: free (registration required)

While LLMs have showcased exceptional language understanding, tailoring them for specific tasks can pose a challenge. This webinar delves into the nuances of supervised fine-tuning, instruction tuning, and the powerful techniques that bridge the gap between model objectives and user-specific requirements.

Here's what we'll cover:

  • Specialized Fine-Tuning: Adapt LLMs for niche tasks using labeled data.
  • Introduction to Instruction Tuning: Enhance LLM capabilities and controllability.
  • Dataset Preparation: Format datasets for effective instruction tuning.
  • BitsAndBytes & Model Quantization: Optimize memory and speed with the BitsAndBytes library.
  • PEFT & LoRA: Understand the benefits of the PEFT library from HuggingFace and the role of LoRA in fine-tuning.
  • TRL Library Overview: Delve into the TRL (Transformers Reinforcement Learning) library's functionalities.
  • SFTTrainer Explained: Navigate the SFTTrainer class by TRL for efficient supervised fine-tuning.
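To give a flavor of one of the topics above, here is a minimal, purely illustrative sketch of the core LoRA idea: rather than updating a frozen weight matrix W directly, you train two small low-rank matrices A and B and add their scaled product to W. This is a pure-Python toy (the function names and toy values are invented for illustration), not the PEFT library's actual implementation, which the webinar will cover.

```python
# Toy sketch of LoRA's low-rank update: W_eff = W + (alpha / r) * B @ A.
# Matrices are plain lists of rows; names and values are hypothetical.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A): frozen base weight plus
    the trainable low-rank update, as used at inference time."""
    scaling = alpha / r
    delta = matmul(B, A)  # (d_out x r) @ (r x d_in) -> same shape as W
    return [[W[i][j] + scaling * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Rank-1 update (r = 1) applied to a 2x2 frozen identity weight.
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[0.5, 0.5]]          # shape: r x d_in
B = [[1.0],
     [2.0]]               # shape: d_out x r
W_eff = lora_effective_weight(W, A, B, alpha=2, r=1)
# -> [[2.0, 1.0], [2.0, 3.0]]
```

The point of the low-rank factorization is that only A and B (2 * r * d parameters instead of d * d) need gradients and optimizer state, which is what makes fine-tuning large models tractable on modest hardware.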

Speaker: Harpreet Sahota is a Data Scientist turned Generative AI practitioner who loves to learn, hack, and share what he figures out along the way. He has graduate degrees in math and statistics and has worked as an actuary and biostatistician. He has built two data science teams from scratch. Harpreet has been in DevRel for the last couple of years, focusing on product and developer experience.

Don't Miss Our Upcoming Webinars!

Follow DataPhoenix on LinkedIn, X, and YouTube to stay updated on our community events and the latest AI & Data industry news.