Data Phoenix Digest - ISSUE 7.2023

Upcoming webinars on unlocking data value with LLMs and reducing NLP inference costs, plus testing ML models for production, how to train your own LLM, 3D generation on ImageNet, universal guidance for diffusion models, CompressGPT, FriendlyCore, S-NeRF, QA4RE, and more.

by Dmitry Spodarets

Hey folks,

Welcome to this week's edition of Data Phoenix Digest! This newsletter keeps you up to date on the news in our community and summarizes the top research papers, articles, and news to help you keep track of trends in the Data & AI world!

Be active in our community and join our Slack to discuss the latest community news, top research papers, articles, events, jobs, and more.

📣
Want to promote your company, conference, job, or event to the Data Phoenix community of Data & AI researchers and engineers? Click here for details.

Data Phoenix community news

Upcoming webinars:

Video recordings of past events:

📣
Get ready for some thrilling updates on our Slack! By becoming a member, you'll be entered for a chance to win one of three copies of the book, "Experimentation for Engineers." But that's not all! As a special bonus, we're also offering an exclusive 35% discount code (bldataphoenix23) on all Manning Publications products in any format. Don't let this incredible opportunity slip away!

Summary of the top papers and articles

Articles

Machine Learning/Artificial Intelligence Testing for Production
Most AI/ML models are now used in automation and decision-making, but how do we know a model is reliable enough to make decisions? In this article, you will learn about a novel method for testing AI/ML models for production, including the testing workflow and the metrics and tools used. Take a look!
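
To make the gist concrete, here is a minimal sketch of the kind of pre-deployment quality gate such a testing workflow might include, runnable with pytest. The synthetic dataset, stand-in model, and 0.90 threshold are illustrative assumptions, not details from the article:

```python
# Sketch of a pre-deployment accuracy gate, runnable with pytest.
# The synthetic dataset, stand-in model, and 0.90 threshold are
# illustrative assumptions, not taken from the article.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def test_candidate_meets_accuracy_gate():
    X, y = make_classification(n_samples=2000, class_sep=2.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    # Stand-in for the candidate model loaded from your model registry.
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    assert acc >= 0.90, f"accuracy {acc:.3f} is below the release gate"
```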

How to train your own Large Language Models
LLMs have made a significant impact in the field of AI, but most companies currently lack the ability to train these models themselves, relying instead on a few major tech firms as providers. Replit has made significant investments in developing the infrastructure necessary to train their own LLMs from scratch. In this blog post, they explain how they did it.

CompressGPT: Decrease Token Usage by ~70%
It is possible to increase the effective context window of GPT-* models by asking the LLM to compress a prompt and then feeding the compressed version into another instance of the same model. This naive approach, however, leads to some critical issues; the author explains how to address them efficiently.
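
For context, the basic compress-then-reuse pattern (before the author's fixes) can be sketched as follows, assuming the OpenAI Python client; the model name and the wording of the compression instruction are placeholders, not CompressGPT itself:

```python
# Naive two-pass sketch: one call compresses the prompt, a second call
# consumes the compressed version. Model name and instruction wording
# are assumptions; CompressGPT adds fixes this sketch lacks.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_compression(long_context: str, question: str) -> str:
    compressed = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Compress the following text as tightly as possible, "
                       "such that you (the same model) could reconstruct its "
                       f"meaning later:\n\n{long_context}",
        }],
    ).choices[0].message.content
    # Feed the compressed context into a fresh instance of the same model.
    answer = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Context (compressed):\n{compressed}\n\n"
                       f"Question: {question}",
        }],
    ).choices[0].message.content
    return answer
```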

FriendlyCore: A novel differentially private aggregation framework
Differentially private (DP) ML algorithms protect user data by limiting the effect of each data point on the aggregated output, with a mathematical guarantee. However, DP algorithms tend to be less accurate than their non-private counterparts. Find out how to solve this problem!
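
For background, the snippet below shows a textbook differentially private mean (clip each point, then add Laplace noise calibrated to the sensitivity). It only illustrates the accuracy/privacy tension that FriendlyCore targets; it is not the FriendlyCore algorithm:

```python
# Textbook DP mean: clip, then add Laplace noise scaled to sensitivity.
# Illustrates the accuracy/privacy trade-off only; NOT FriendlyCore.
import numpy as np

def dp_mean(x: np.ndarray, clip: float, epsilon: float, seed=None) -> float:
    rng = np.random.default_rng(seed)
    clipped = np.clip(x, -clip, clip)
    # Changing one point moves the clipped mean by at most 2*clip/n.
    sensitivity = 2 * clip / len(x)
    return clipped.mean() + rng.laplace(scale=sensitivity / epsilon)

data = np.random.default_rng(0).normal(loc=3.0, size=10_000)
print(dp_mean(data, clip=100.0, epsilon=0.5))  # loose clip -> more noise
print(dp_mean(data, clip=6.0, epsilon=0.5))    # tight clip -> less noise, more bias risk
```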

Distributed Hyperparameter Tuning in Vertex AI Pipeline
Vertex AI Pipelines offer a handy way to implement end-to-end ML workflows with very little effort. This comprehensive article presents a new way to enable distributed hyperparameter tuning in a GCP Vertex AI pipeline. Learn more!
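
As a rough reference point, a standalone hyperparameter tuning job in the Vertex AI Python SDK looks roughly like the sketch below; the project ID, container image, and metric name are placeholders, and the article's pipeline-integrated approach goes beyond this basic setup:

```python
# Rough sketch of a Vertex AI hyperparameter tuning job; the project,
# bucket, image URI, and metric name are all placeholders.
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-bucket")

custom_job = aiplatform.CustomJob(
    display_name="trainer",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
    }],
)

tuning_job = aiplatform.HyperparameterTuningJob(
    display_name="hpt-demo",
    custom_job=custom_job,
    metric_spec={"val_accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-5, max=1e-1, scale="log"),
        "batch_size": hpt.DiscreteParameterSpec(values=[32, 64, 128], scale="linear"),
    },
    max_trial_count=20,
    parallel_trial_count=4,  # parallel trials are what distributes the tuning
)
tuning_job.run()
```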

Papers & projects

3D Generation on ImageNet
In this paper, the authors develop a 3D generator with Generic Priors (3DGP): a 3D synthesis framework with more general assumptions about the training data, and show that it scales to challenging datasets like ImageNet. The framework is built on three new ideas. Check them out!

3D-aware Conditional Image Synthesis
This paper describes a 3D-aware conditional generative model for controllable photorealistic image synthesis. It integrates 3D representations with conditional generative modeling, enabling controllable high-resolution 3D-aware rendering conditioned on user inputs.

Universal Guidance for Diffusion Models
Typical diffusion models cannot be conditioned on other modalities without retraining. This work presents a universal guidance algorithm that enables diffusion models to be controlled by arbitrary guidance modalities without the need to retrain any use-specific components.
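
Schematically, this style of guidance amounts to nudging the model's noise prediction with the gradient of a loss computed on the estimated clean image. The sketch below is a paraphrase in standard DDPM notation, not the authors' exact algorithm; denoiser, guidance_net, and loss_fn are stand-ins:

```python
# Schematic single step of gradient-based guidance (a paraphrase of the
# forward universal-guidance idea, not the authors' exact algorithm).
import torch

def guided_eps(x_t, t, alpha_bar_t, denoiser, guidance_net, target,
               loss_fn, scale=1.0):
    x_t = x_t.detach().requires_grad_(True)
    eps = denoiser(x_t, t)                        # predicted noise
    # Tweedie-style estimate of the clean image from the noisy sample.
    x0_hat = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
    loss = loss_fn(guidance_net(x0_hat), target)  # any off-the-shelf network
    grad = torch.autograd.grad(loss, x_t)[0]
    # Nudge the noise prediction along the guidance gradient.
    return eps + scale * (1 - alpha_bar_t) ** 0.5 * grad
```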

S-NeRF: Neural Radiance Fields for Street Views
In this paper, the authors propose a new street-view NeRF (S-NeRF) that considers novel view synthesis of both the large-scale background scenes and the foreground moving vehicles jointly. Learn more about their approach and the results of experiments!

Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors
QA4RE is a framework that aligns relation extraction (RE) with question answering (QA), a predominant task in instruction-tuning datasets. It enables LLMs to outperform strong zero-shot baselines by a large margin. This work illustrates a promising way of adapting LLMs to challenging tasks by aligning them with more common instruction-tuning tasks like QA.
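
To illustrate the alignment idea, a toy prompt builder might recast an RE instance as multiple-choice QA, as sketched below; the template wording is a hypothetical paraphrase, not QA4RE's exact prompt:

```python
# Toy illustration of the alignment idea: recast a relation-extraction
# instance as multiple-choice QA so it resembles instruction-tuning data.
# The template wording is a hypothetical paraphrase of the approach.
def re_as_multiple_choice(sentence: str, head: str, tail: str,
                          relations: list[str]) -> str:
    options = "\n".join(
        f"{chr(65 + i)}. {head} {rel} {tail}"
        for i, rel in enumerate(relations)
    )
    return (
        f"Sentence: {sentence}\n"
        f"Which of the following is true?\n{options}\n"
        "Answer with the letter of the correct option."
    )

print(re_as_multiple_choice(
    "Steve Jobs co-founded Apple in 1976.",
    "Steve Jobs", "Apple",
    ["founded", "was employed by", "has no relation to"],
))
```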

🤗
If you enjoy our work, we would greatly appreciate your support by sharing our digest with your friends on Twitter, LinkedIn, or Facebook using the hashtag #dataphoenix. Your help in reaching a wider audience is invaluable to us!
