
Fine-Tuning OLMo the Open Language Model

AI Makerspace is hosting a deep dive into the OLMo series of models, including a discussion of the Dolma pre-training dataset as well as the model architectures leveraged for each of the three newly released versions: OLMo-1B, OLMo-7B, and OLMo-7B-Twin-2T!

by Sarah DeSouza

On February 1, 2024, The Allen Institute for AI (AI2) released OLMo, the first “truly open” LLM. This includes the complete set of unsupervised pre-training data, the code used for training, the model weights, and the evaluation suite.

According to the OLMo project lead Noah Smith, “With OLMo, open actually means ‘open,’ and everyone in the AI research community will have access to all aspects of model creation, including training code, evaluation methods, data, and so on.”

In this event, we’ll do a deep dive into the OLMo series of models, including discussions of the Dolma pretraining dataset as well as the model architectures leveraged for each of the three newly released versions: OLMo-1B, OLMo-7B, and OLMo-7B-Twin-2T. We’ll also discuss and demonstrate the out-of-the-box performance of the OLMo-7B base model using the Paloma benchmark. Finally, we’ll fine-tune the model using a Quantized Low-Rank Adaptation (QLoRA) approach on a dataset from AI2’s open-instruct GitHub repo!
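For intuition on why the QLoRA approach makes fine-tuning a 7B model feasible on modest hardware: the quantized base weights stay frozen, and only a pair of small low-rank matrices is trained per adapted layer. The sketch below is plain Python (no ML libraries) showing the parameter arithmetic; the 4096×4096 projection size and rank 16 are illustrative assumptions, not OLMo’s exact configuration.

```python
def lora_param_counts(d_out: int, d_in: int, rank: int) -> tuple[int, int]:
    """Compare trainable parameters for full fine-tuning of one weight
    matrix vs. a LoRA adapter delta_W = B @ A, where B is (d_out, rank)
    and A is (rank, d_in)."""
    full = d_out * d_in            # every base weight is trainable
    lora = rank * (d_out + d_in)   # only the two low-rank factors are
    return full, lora

def lora_delta(B, A):
    """Dense low-rank update delta_W = B @ A, computed with plain lists."""
    rank, d_in = len(A), len(A[0])
    return [
        [sum(B[i][r] * A[r][j] for r in range(rank)) for j in range(d_in)]
        for i in range(len(B))
    ]

# Illustrative sizes: one 4096x4096 projection with rank-16 adapters.
full, lora = lora_param_counts(4096, 4096, 16)
print(full, lora, f"{100 * lora / full:.2f}%")  # 16777216 131072 0.78%
```

At rank 16, the adapter trains well under 1% of the weights of each adapted matrix, which is what lets the optimizer state fit in memory alongside the 4-bit-quantized frozen base model.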

If you want to understand how to build, ship, and share LLM applications at the open-source edge of AI, this is the event for you!

Who should attend the event?

  • Learners who want to understand why OLMo is the first “truly open” LLM of its kind.
  • AI Engineers and enthusiasts who want to fine-tune the latest open LLM on their hardware.
  • Practitioners who want to understand the technical details of the AI2 OLMo series of models.

Speakers:

  • Dr. Greg Loughnane is the Co-Founder & CEO of AI Makerspace, where he serves as an instructor for their AI Engineering Bootcamp. Since 2021 he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.
  • Chris Alexiuk is the Co-Founder & CTO at AI Makerspace, where he serves as an instructor for their AI Engineering Bootcamp. Previously, he’s held roles as a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.

Follow AI Makerspace on LinkedIn & YouTube to stay updated with workshops, new courses, and opportunities for corporate training.

