Data Phoenix Digest - ISSUE 1.2024

Welcome to this week's edition of Data Phoenix Digest! This newsletter keeps you up to date on news from our community and summarizes the top research papers, articles, and announcements, so you can keep track of trends in the Data & AI world!

Be active in our community and join our Slack to discuss the latest community news, top research papers, articles, events, jobs, and more...

📣
Want to promote your company, product, event, or job to the Data Phoenix community of Data & AI researchers and engineers?
Click here for details.

Data Phoenix's upcoming webinar:

A Whirlwind Tour of ML Model Serving Strategies (Including LLMs)
There are many recipes for serving machine learning models to end users today, and even though new ones keep popping up, some questions remain: how do we pick the appropriate serving recipe from the menu available to us, and how can we execute it as quickly and efficiently as possible? In this talk, we'll take a whirlwind tour of the machine learning deployment strategies available today for both traditional ML systems and Large Language Models, and we'll also touch on a few do's and don'ts while we're at it. The session will be jargon-less, but not buzzword- or meme-less.


💡
Follow us on LinkedIn, X, and YouTube to stay updated on our community events and the latest AI & Data industry news.

Summary of the top articles and papers

Articles

LoRA Training Scripts of the World, Unite!
This article presents a step-by-step guide showing how combining the Pivotal Tuning technique used in Replicate's SDXL Cog trainer with the Prodigy optimizer used in the Kohya trainer can help achieve excellent results when training Dreambooth LoRAs for SDXL.
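
The article is specific to the Replicate and Kohya trainers, but the core LoRA idea they build on is easy to state: freeze the pretrained weight matrix and learn a low-rank update on top of it. A minimal, illustrative PyTorch sketch (not the article's training script; the rank and scaling values here are arbitrary):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        # base projection plus the low-rank correction
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Usage: swap attention projections in the UNet / text encoder for LoRALinear and train only the adapters
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
```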

Fine-tune a Mistral-7b model with Direct Preference Optimization
Fine-tuned LLMs can still be biased, toxic, or harmful, and alignment techniques such as Reinforcement Learning from Human Feedback (RLHF) are needed to make them behave better. This article explains how an RLHF-style technique, Direct Preference Optimization (DPO), can be used to fine-tune Mistral-7b into NeuralHermes-2.5.
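
For context, DPO skips the explicit reward model of classic RLHF and optimizes the policy directly on preference pairs. Below is a minimal sketch of the DPO loss in PyTorch, assuming you already have per-example summed log-probabilities of the chosen and rejected responses under both the policy and the frozen reference model (in practice, libraries such as Hugging Face TRL implement this for you):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    """DPO objective: push the policy to prefer chosen over rejected responses,
    measured relative to the reference model. Inputs are summed log-probs per example."""
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # -log sigmoid(beta * [(log pi_w - log pi_l) - (log ref_w - log ref_l)])
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()

# Toy example with made-up log-probabilities for a batch of two preference pairs
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -10.0]),
                torch.tensor([-13.0, -9.8]), torch.tensor([-13.5, -9.9]))
```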

How to Detect Hallucinations in LLMs
LLM hallucinations are a serious problem: it often seems you must either blindly trust LLM outputs or fact-check every single one. In this article, the author presents an interesting approach to teaching LLMs to say “I don’t know” when they don’t actually know the answer.

Harmonizing Multi-GPUs: Efficient Scaling of LLM Inference
What happens when your models are too big or the request traffic too high? This article delves into how large, massively parallel clusters of GPUs can be used to scale LLM inference effectively and cost-efficiently. Give it a thorough read!

uVitals – An Anomaly Detection & Alerting System
Have you ever wondered how Uber detects anomalies in its systems? Meet uVitals, Uber’s anomaly detection and alerting system, which specializes in spotting anomalies in multi-dimensional time-series data. Check out this article to learn more!
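
The article covers uVitals' actual architecture in depth; purely as an illustration of the general idea (not Uber's implementation), here is a toy rolling z-score detector that flags points in a time series that deviate sharply from recent history:

```python
import numpy as np

def rolling_zscore_alerts(series, window: int = 24, threshold: float = 3.0):
    """Flag indices whose value deviates from the trailing-window mean
    by more than `threshold` standard deviations. Illustrative only."""
    series = np.asarray(series, dtype=float)
    alerts = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean, std = hist.mean(), hist.std()
        if std > 0 and abs(series[i] - mean) / std > threshold:
            alerts.append(i)
    return alerts

# Example: a noisy, roughly flat metric with one injected spike at index 40
metric = 10.0 + np.random.normal(0, 0.1, 60)
metric[40] += 5.0
print(rolling_zscore_alerts(metric))  # expected to include index 40
```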

Papers & projects

On Noisy Evaluation in Federated Hyperparameter Tuning
Hyperparameter tuning is critical to the success of cross-device federated learning applications, but the scale, heterogeneity, and privacy constraints of these settings make it difficult. In this paper, the authors present a simple and effective approach to boost the evaluation signal and address these challenges.

StreamDiffusion: A Pipeline-level Solution for Real-time Interactive Generation
StreamDiffusion is a real-time diffusion pipeline designed for interactive image generation. It transforms the original sequential denoising into a batched denoising process by parallelizing the streaming pipeline. Check out the evaluation results!

Tracking Any Object Amodally
The TAO-Amodal benchmark is a dataset that includes amodal and modal bounding boxes for visible and occluded objects. It features 880 diverse categories in thousands of video sequences, helping to address the scarcity of amodal data. See it in action! 

Amphion: An Open-Source Audio, Music and Speech Generation Toolkit
Amphion is a toolkit for audio, music, and speech generation that supports reproducible research and helps junior researchers and engineers get started with research and development in the field. Check out its unique features!

AnyText: Multilingual Visual Text Generation And Editing
AnyText is a diffusion-based multilingual visual text generation and editing model that focuses on rendering accurate and coherent text in the image. AnyText can write characters in multiple languages and is the first work to address multilingual visual text generation.

MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices
MobileBrick is a novel data capturing and 3D annotation pipeline to obtain precise 3D ground-truth shapes without relying on expensive 3D scanners. The MobileBrick dataset provides a unique opportunity for future research on high-quality 3D reconstruction.

FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
FlexGen is a high-throughput generation engine for running LLMs with limited GPU memory. It can be flexibly configured under various hardware resource constraints by aggregating memory and computation from the GPU, CPU, and disk. Learn more about it!
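
FlexGen's offloading policy is considerably more sophisticated than this, but the underlying idea of trading extra data movement for GPU memory can be illustrated with a toy PyTorch loop that keeps layers in CPU memory and streams each one to the GPU only for its forward pass (a conceptual sketch, not FlexGen's API):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def offloaded_forward(layers, x, device="cuda"):
    """Run a stack of layers kept on the CPU, moving one layer at a time
    to the GPU for compute and back afterwards. Illustrative only."""
    x = x.to(device)
    for layer in layers:
        layer.to(device)    # stream this layer's weights in
        x = layer(x)
        layer.to("cpu")     # free GPU memory for the next layer
    return x

# Toy model: a stack of large linear layers that may not all fit on the GPU at once
layers = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(8)])
if torch.cuda.is_available():
    out = offloaded_forward(layers, torch.randn(1, 4096))
```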

Datasets

Objaverse: A Universe of Annotated 3D Objects
Objaverse 1.0 is a large dataset of 800K+ 3D models with descriptive captions, tags, and animations. Objaverse improves upon present-day 3D repositories in scale, number of categories, and the visual diversity of instances within a category.


DataPhoenix is free today. Do you enjoy our digests and webinars? Value our AI coverage? Your support as a paid subscriber helps us continue our mission of delivering top-notch AI insights. Join us in shaping the future of AI with the DataPhoenix community.