Welcome to this week's edition of Data Phoenix Digest! This newsletter keeps you up to date on news in our community and summarizes the top research papers, articles, and events, keeping you on top of trends in the Data & AI world!
Be active in our community and join our Slack to discuss the latest community news, top research papers, articles, events, jobs, and more.
Click here for details.
Data Phoenix's upcoming webinars:
GPT on a Leash: Evaluating LLM-based Apps & Mitigating Their Risks
Testing and evaluating AI systems is extremely challenging, especially when it involves text and unstructured data. In the case of LLM-based applications, these challenges are magnified by the fact that there isn't "one correct answer" and by external constraints, such as topics that shouldn't be discussed.
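The constraint side of this problem can be illustrated with a simple guardrail-style check. The sketch below is purely illustrative (the banned-topic list and function names are hypothetical, not from the webinar): because there is no single correct answer, an evaluator scores properties of a response rather than exact-matching it.

```python
# Minimal sketch of property-based evaluation for an LLM-based app.
# The banned-topic list and helper names are illustrative stand-ins.
BANNED_TOPICS = {"medical advice", "legal advice"}

def violates_policy(text: str) -> bool:
    """Flag responses that touch topics the app must not discuss."""
    lowered = text.lower()
    return any(topic in lowered for topic in BANNED_TOPICS)

def evaluate_response(response: str) -> dict:
    """Score properties of the response instead of comparing it
    to a single reference answer, which usually doesn't exist."""
    return {
        "non_empty": bool(response.strip()),
        "within_policy": not violates_policy(response),
    }
```

A real evaluation harness would add many more property checks (groundedness, tone, format), but they compose the same way: each returns a verdict, and the app aggregates them.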
Democratizing AI Deployment
Deploying AI models remains a very complex problem due to a deeply fragmented stack of incompatible tooling. The stack can be broken down into four areas: the orchestration layer, compression techniques, compiler infrastructure, and the hardware. We will cover each of these areas and their unique challenges. When formulating optimal deployment recipes, each layer of the stack needs to be considered holistically, as decisions at one level affect the decisions at every other level. At Unify, we're building the world's only fully open catalogue of AI tools, both open source and proprietary, and we're developing novel ways of combining these tools to create optimal deployment solutions for a variety of use cases. During the talk, we will discuss the different kinds of decisions needed when deploying in different contexts. We will then present demos of lightning-fast applications made possible by this holistic approach.
- How to Instruction Tune a Base Language Model
Harpreet Sahota (Developer Relations Manager at Deci) / December 7
- A Whirlwind Tour of Machine Learning Monitoring Techniques
Ramon Perez (Developer Advocate at Seldon) / December 19
Summary of the top articles and papers
Retrieval-Augmented Generation (RAG): From Theory to LangChain Implementation
What do you know about RAG? This article explains the concept and theory behind RAG, then showcases a simple RAG pipeline built with LangChain for orchestration, OpenAI language models, and a Weaviate vector database.
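For orientation, the RAG pattern itself (retrieve relevant context, augment the prompt, generate) can be sketched in a few lines of plain Python. The toy corpus, word-overlap retriever, and prompt template below are stand-ins to show the shape of the pipeline, not the article's LangChain/OpenAI/Weaviate implementation, which uses embeddings and a real vector store:

```python
# Minimal sketch of the RAG pattern: retrieve -> augment -> generate.
# The corpus, scoring function, and prompt template are toy stand-ins.
import re

CORPUS = [
    "Weaviate is a vector database.",
    "LangChain orchestrates LLM pipelines.",
    "RAG grounds LLM answers in retrieved documents.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap (a real system uses embeddings)."""
    q = tokens(query)
    ranked = sorted(CORPUS, key=lambda doc: len(q & tokens(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The final prompt would then be passed to a language model; grounding the answer in retrieved context is what distinguishes RAG from plain prompting.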
All You Need to Know to Develop Using Large Language Models
This article, aimed at software developers, data scientists, and AI enthusiasts, explains in simple terms the key technologies needed to start developing LLM-based applications. Dive in to learn the basics of LLM development, from theory to practice.
Serverless Compute for LLM — With a Step-by-Step Guide for Hosting Mistral 7B on AWS Lambda
One challenge in deploying LLMs in production is identifying the optimal cloud hosting solution. Given the expense of GPUs, maintaining a traditional web server equipped with such a model can lead to significant costs. Here’s how serverless compute can help.
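The core of such a deployment is a Lambda handler that loads the model once per warm container, so the heavy load cost is paid only on cold start. The sketch below shows that caching pattern only; the model loader and generation call are hypothetical placeholders, not the guide's actual code for hosting Mistral 7B:

```python
# Minimal AWS Lambda handler sketch for serving an LLM.
# load_model() is a hypothetical placeholder; a real deployment would
# load quantized Mistral 7B weights from the container image here.
import json

_model = None  # cached across warm invocations of the same container

def load_model():
    """Placeholder loader; stands in for loading real model weights."""
    return lambda prompt: f"(model output for: {prompt})"

def handler(event, context):
    global _model
    if _model is None:          # pay the load cost only on cold start
        _model = load_model()
    prompt = json.loads(event["body"])["prompt"]
    completion = _model(prompt)
    return {"statusCode": 200, "body": json.dumps({"completion": completion})}
```

Because Lambda bills per invocation rather than for an always-on GPU server, this pattern can cut costs for bursty, low-volume traffic, at the price of cold-start latency.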
Accelerating PyTorch Training Workloads with FP8
This article explores the latest advancements in AI hardware: the integration of 8-bit floating-point (FP8) tensor processing cores in such architectures as Nvidia Hopper and Habana Gaudi2. Learn how they boost computational efficiency in AI training and inference.
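A key ingredient of FP8 training is per-tensor scaling: values are scaled into FP8's narrow dynamic range (the E4M3 format tops out at 448) before casting, and rescaled afterwards. The pure-Python sketch below illustrates only that scaling idea; the real Hopper/Gaudi2 kernels and PyTorch integration work quite differently:

```python
# Conceptual sketch of per-tensor scaling for FP8 (E4M3) quantization.
# Real FP8 training performs this in hardware with dedicated tensor cores.
E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def quantize_fp8(values: list[float]) -> tuple[list[float], float]:
    """Scale values so the largest magnitude maps to E4M3_MAX."""
    amax = max(abs(v) for v in values)
    scale = E4M3_MAX / amax if amax > 0 else 1.0
    return [v * scale for v in values], scale

def dequantize_fp8(scaled: list[float], scale: float) -> list[float]:
    """Undo the scaling after the low-precision computation."""
    return [v / scale for v in scaled]
```

Keeping tensors inside the representable range is what makes 8-bit arithmetic usable for training without catastrophic overflow or underflow.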
Papers & projects
Mustango: Toward Controllable Text-to-Music Generation
Mustango is a diffusion-based, music-domain-knowledge-inspired text-to-music system that extends the Tango text-to-audio model. Mustango controls the generated music not only with general text captions but also with richer captions that include specific instructions related to chords, beats, tempo, and key. Learn more about it!
MEDITRON-70B: Scaling Medical Pretraining for Large Language Models
MEDITRON is a suite of open-source LLMs with 7B and 70B parameters adapted to the medical domain. MEDITRON builds on Llama-2 and extends pretraining on a curated medical corpus, including selected PubMed articles, abstracts, and internationally recognized medical guidelines. Try it out for yourself!
Adversarial Diffusion Distillation
Adversarial Diffusion Distillation (ADD) is a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1-4 steps while maintaining high image quality. It uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal to ensure high image fidelity. Worth taking a look!
DemoFusion: Democratising High-Resolution Image Generation With No $$$
DemoFusion is a novel framework that extends open-source GenAI models, employing Progressive Upscaling, Skip Residual, and Dilated Sampling mechanisms to achieve higher-resolution image generation. DemoFusion requires more passes, but the intermediate results can serve as "previews", facilitating rapid prompt iteration.
DataPhoenix is free today. Do you enjoy our digests and webinars? Do you value our AI coverage? Your support as a paid subscriber helps us continue our mission of delivering top-notch AI insights. Join us as a paid subscriber and help shape the future of AI with the DataPhoenix community.