Supercharging Large Language Model training using Metaflow and the Cloud

12:35pm - 1:08pm on Saturday, October 7

Valay Dave, Utkarsh Kashyap


In the current landscape of Natural Language Processing (NLP), Large Language Models (LLMs) such as GPT-4 are leading the charge, driving research into scalable, efficient training techniques that are accessible beyond a handful of well-resourced labs. The rise of LLMs has also brought parameter-efficient finetuning techniques such as Low-Rank Adaptation (LoRA), which, while promising, remain challenging to apply and experiment with in practice. This presentation proposes a consistent, effective pattern for finetuning LLMs reliably, and uses Metaflow, a human-friendly ML infrastructure library, to showcase the pattern in action.
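For context on why LoRA is parameter-efficient: instead of updating a full weight matrix W, LoRA trains two small low-rank factors A and B so the effective weight becomes W + BA. The toy sketch below illustrates this idea in plain Python with made-up numbers; it is not the speakers' implementation, and real LoRA finetuning is applied to transformer attention weights via libraries such as Hugging Face's peft.

```python
# Toy illustration of Low-Rank Adaptation (LoRA): rather than finetuning a
# full d x d weight matrix W, train two small matrices B (d x r) and A (r x d)
# so the adapted weight is W + B @ A, cutting trainable parameters from d*d
# to 2*d*r. Values here are arbitrary and purely illustrative.

def matmul(X, Y):
    """Multiply two matrices represented as lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, r = 4, 1  # hidden size and LoRA rank (r << d)

# Frozen pretrained weight (identity, for simplicity).
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

# Trainable low-rank factors.
B = [[0.5] for _ in range(d)]   # d x r
A = [[0.1, 0.2, 0.3, 0.4]]      # r x d

delta = matmul(B, A)            # d x d update with rank <= r
W_adapted = [[w + dw for w, dw in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]

trainable = 2 * d * r           # parameters actually trained with LoRA
full = d * d                    # parameters for full finetuning
print(f"trainable params: {trainable} vs full finetuning: {full}")
```

For d = 4 and r = 1 this trains 8 parameters instead of 16; at real transformer scale (d in the thousands, r around 8-64) the savings are far larger, which is what makes experimentation with LoRA tractable on modest hardware.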
