
Webinar "Reducing NLP Inference costs through model specialisation"

This talk will discuss ways to reduce costs for NLP inference through a better choice of model, hardware, and model compression techniques.

by Dmitry Spodarets

The Data Phoenix team invites you to our upcoming "AI Project Spotlight" webinar, which will take place on July 6 at 7 pm CET / 10 am PST.

  • Topic: Reducing NLP Inference Costs through Model Specialisation
  • Speaker: Meryem Arik, Co-founder of TitanML
  • Participation: free (registration required)

ABOUT THE SPEAKER AND TOPIC

NLP inference can be very expensive, requiring access to powerful GPUs. In this talk, Meryem discusses ways to reduce this cost by over 90% through a better choice of model, hardware, and model compression techniques. This is an essential talk for anyone looking to put NLP models into production.

Meryem Arik

Meryem is the co-founder of TitanML, an NLP development platform focused on the deployability of LLMs, allowing businesses to build smaller and cheaper deployments of language models. The TitanML platform automates much of the difficult MLOps and inference optimisation work so that businesses can build and deploy state-of-the-art language models with ease.


Data Phoenix Digest

Subscribe to the weekly digest with a summary of the top research papers, articles, news, and our community events to keep track of trends and grow in the Data & AI world!
