Webinar "Reducing NLP Inference costs through model specialisation"
This talk will discuss ways to reduce costs for NLP inference through a better choice of model, hardware, and model compression techniques.
The Data Phoenix team invites you to our upcoming "AI Project Spotlight" webinar, which will take place on July 6 at 7 pm CET / 10 am PST.
- Topic: Reducing NLP Inference Costs through Model Specialisation
- Speaker: Meryem Arik, Co-founder of TitanML
- Participation: free (registration required)
ABOUT THE SPEAKER AND TOPIC
NLP inference can be very expensive, requiring access to powerful GPUs. In this talk, Meryem discusses ways to reduce this cost by over 90% through a better choice of model, hardware, and model compression techniques. This is an essential talk for anyone looking to put NLP models into production.
Meryem Arik
Meryem is the co-founder of TitanML, an NLP development platform focused on the deployability of LLMs, enabling businesses to build smaller and cheaper deployments of language models. The TitanML platform automates much of the difficult MLOps and inference optimisation work, allowing businesses to build and deploy state-of-the-art language models with ease.