Data Phoenix Digest - ISSUE 43

How to process a dataframe with millions of rows in seconds, calculating derivatives in PyTorch, BERT’s cousin for advanced topic modeling, understanding DBSCAN, convolutional Xformers for vision, transformers in medical imaging, UniFormer, HumanNeRF, Stanford CoreNLP, podcasts, and more ...

Dmitry Spodarets

ARTICLES

Multi-threading and Multi-processing in Python
In this tutorial, we’ll take a deep dive into multi-threading and multi-processing with Python and learn how they are related to concurrency and parallelism.
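The key distinction the tutorial builds on can be sketched in a few lines: threads overlap I/O waits because the GIL is released during blocking calls, while CPU-bound work needs separate processes to run in parallel. A minimal stdlib-only illustration (task names and timings are ours, for demonstration):

```python
import threading
import multiprocessing
import time

def io_task(delay):
    # Simulated I/O-bound work: the GIL is released while sleeping,
    # so multiple threads can overlap these waits.
    time.sleep(delay)

def cpu_task(n):
    # CPU-bound work: threads would serialize on the GIL,
    # but separate processes run truly in parallel.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # I/O-bound: two 0.2s waits run concurrently in threads.
    start = time.perf_counter()
    threads = [threading.Thread(target=io_task, args=(0.2,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"threads: {time.perf_counter() - start:.2f}s")  # ~0.2s, not 0.4s

    # CPU-bound: spread the work over a pool of worker processes.
    with multiprocessing.Pool(processes=2) as pool:
        print(pool.map(cpu_task, [100_000, 200_000]))
```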

On the Difficulty of Extrapolation with NN Scaling
Today, hyperparameter tuning can cost millions. In this post, we’ll walk you through an example showing how to train large models cost-efficiently.

How to Process a DataFrame with Millions of Rows in Seconds?
Terality is a serverless data processing engine that processes the data in the cloud. In this article, you’ll learn how to use it to process and handle large-scale data frames.

Calculating Derivatives in PyTorch
Derivatives are the foundation of calculus. In this tutorial, you’ll learn how to compute them in PyTorch in a variety of ways, from taking the derivative of a single-variable function to computing partial derivatives of multi-variable functions.
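A minimal sketch of the autograd mechanics the tutorial covers, assuming PyTorch is installed (the example functions are ours):

```python
import torch

# Derivative of y = x^3 + 2x at x = 2: dy/dx = 3x^2 + 2 = 14.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 + 2 * x
y.backward()           # populates x.grad with dy/dx
print(x.grad)          # tensor(14.)

# Partial derivatives of a multi-variable function f(u, v) = u*v + v^2.
u = torch.tensor(3.0, requires_grad=True)
v = torch.tensor(4.0, requires_grad=True)
f = u * v + v ** 2
f.backward()
print(u.grad, v.grad)  # df/du = v = 4, df/dv = u + 2v = 11
```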

Meet BERTopic, BERT’s Cousin for Advanced Topic Modeling
BERTopic is a topic modeling technique that leverages transformers and c-TF-IDF. In this post, you’ll learn how to use it for automatic topic discovery.

Torch Hub Series #6: Image Segmentation
In this tutorial, you will learn the concept behind Fully Convolutional Networks (FCNs) for segmentation. Check out the entire series to get more information on the topic.

Understanding DBSCAN and Implementation with Python
In this post, you’ll learn what DBSCAN is, go over its major ideas, and see several ways to implement it in Python. Learn more!
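As a companion to the post, here is a minimal stdlib-only DBSCAN sketch (in practice you would typically reach for `sklearn.cluster.DBSCAN`); the function and the toy points are ours, for illustration:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise."""
    labels = {}                       # point index -> cluster id
    cluster_id = 0

    def neighbors(i):
        # Brute-force eps-neighborhood (includes the point itself).
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if i in labels:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:      # not a core point (for now): noise
            labels[i] = -1
            continue
        labels[i] = cluster_id        # start a new cluster from this core point
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels.get(j) == -1:   # noise reached from a core: border point
                labels[j] = cluster_id
            if j in labels:
                continue
            labels[j] = cluster_id
            j_neighbors = neighbors(j)
            if len(j_neighbors) >= min_pts:   # j is itself a core point: expand
                queue.extend(j_neighbors)
        cluster_id += 1
    return [labels[i] for i in range(len(points))]

# Two dense groups plus one outlier.
pts = [(0, 0), (0.2, 0), (0, 0.2), (5, 5), (5.2, 5), (5, 5.2), (10, 10)]
print(dbscan(pts, eps=0.5, min_pts=3))  # [0, 0, 0, 1, 1, 1, -1]
```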

PAPERS

Deep Physical Neural Networks Trained with Backpropagation
The authors introduce a hybrid in situ–in silico algorithm (physics-aware training) that applies backpropagation to train controllable physical systems, and demonstrate it on optical, mechanical, and electronic systems.

Convolutional Xformers for Vision
Vision transformers (ViTs) see limited practical use because of their heavy compute and memory demands. In this paper, the authors propose a hybrid architecture that combines linear attention with convolutions to make ViTs more cost- and resource-efficient.

Transformers in Medical Imaging: A Survey
In this survey, the researchers provide a comprehensive review of the applications of Transformers in medical imaging, ranging from recently proposed architectural designs to unsolved issues.

UniFormer: Unifying Convolution and Self-attention for Visual Recognition
UniFormer is a novel Unified transFormer that can seamlessly integrate the merits of convolution and self-attention in a concise transformer format. Learn more!

PROJECTS

HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video
HumanNeRF is a free-viewpoint rendering method that works on a given monocular video of a human performing complex body motions, e.g. a video from YouTube.

PODCASTS

Matrix Profiles in Stumpy
In this podcast, Sean Law, Principal Data Scientist, R&D at a Fortune 500 company, talks about his creation of the STUMPY Python library. Worth listening!
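STUMPY’s core routine, `stumpy.stump`, computes the matrix profile far more efficiently than this, but the idea can be sketched naively with the stdlib: for every length-m subsequence of a time series, record the z-normalized distance to its nearest non-trivial match. The function names and toy series below are ours:

```python
import math

def znorm_dist(a, b):
    """Z-normalized Euclidean distance between two equal-length subsequences."""
    def znorm(s):
        mu = sum(s) / len(s)
        sd = math.sqrt(sum((x - mu) ** 2 for x in s) / len(s))
        return [(x - mu) / sd for x in s] if sd > 0 else [0.0] * len(s)
    za, zb = znorm(a), znorm(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(za, zb)))

def matrix_profile(ts, m):
    """For each length-m subsequence, distance to its nearest non-trivial match."""
    n = len(ts) - m + 1
    profile = []
    for i in range(n):
        best = math.inf
        for j in range(n):
            if abs(i - j) >= m:       # exclusion zone: skip trivially close matches
                best = min(best, znorm_dist(ts[i:i + m], ts[j:j + m]))
        profile.append(best)
    return profile

# A repeating pattern gives near-zero profile values; the spike stands out.
ts = [0, 1, 0, -1, 0, 1, 0, -1, 0, 5, 0, 1, 0, -1]
mp = matrix_profile(ts, m=4)
print([round(v, 2) for v in mp])
```

Low profile values mark repeated motifs; the largest value flags the most anomalous subsequence (a "discord"), which is the property STUMPY exploits at scale.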

TOOLS

Stanford CoreNLP
Stanford CoreNLP is a set of natural language analysis tools. It provides varying levels of support for (Modern Standard) Arabic, (mainland) Chinese, French, German, and Spanish.
