The next-generation PyTorch 2.0 release promises more speed, a more Pythonic design, and greater dynamism

At the recent PyTorch Conference, the team introduced PyTorch 2.0, which preserves the same development mode and user experience while fundamentally changing and improving how PyTorch works at the compiler level.

Maryna Marchuk

The PyTorch team has announced PyTorch 2.0, a significant update to the PyTorch machine learning library. The new version brings improvements in performance, memory management, and hardware support.

PyTorch 2.0 is a next-generation release comprising over 4,541 commits from 428 contributors since version 1.13.1. It includes a stable version of Accelerated Transformers (formerly called Better Transformers), as well as a beta of torch.compile, the main API for PyTorch 2.0, which wraps a model and returns a compiled model. TorchInductor, the technology behind torch.compile, relies on the OpenAI Triton deep learning compiler on Nvidia and AMD GPUs to generate fast code while hiding low-level hardware details.

The API is integrated into torch.compile(), and model developers can also use scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() function. The Metal Performance Shaders (MPS) backend provides GPU-accelerated PyTorch training on Mac platforms, adding support for the 60 most-used operations and bringing coverage to more than 300 operators. In addition, Amazon AWS has optimized PyTorch CPU inference on AWS Graviton3-based C7g instances.
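Calling the new attention kernel directly looks like this. A minimal sketch assuming PyTorch 2.0+; the tensor shapes are illustrative, and the reference computation is included only to show what the fused kernel computes.

```python
# Minimal sketch of scaled_dot_product_attention (assumes PyTorch 2.0+).
import math
import torch
import torch.nn.functional as F

batch, heads, seq, dim = 2, 4, 8, 16  # illustrative shapes
q = torch.randn(batch, heads, seq, dim)
k = torch.randn(batch, heads, seq, dim)
v = torch.randn(batch, heads, seq, dim)

# Fused kernel: dispatches to an efficient implementation under the hood.
out = F.scaled_dot_product_attention(q, k, v)

# Reference computation it replaces: softmax(Q K^T / sqrt(d)) V.
scores = q @ k.transpose(-2, -1) / math.sqrt(dim)
ref = scores.softmax(dim=-1) @ v
```

The fused call avoids materializing the full attention matrix when a memory-efficient backend is available, which is where the Accelerated Transformers speedups come from.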

Overall, PyTorch 2.0 is a major step forward for the library, offering new features and improvements that make it easier and more efficient to use.